Limits on the Universal Method for Matrix Multiplication

Author: Josh Alman



Author Details

Josh Alman
  • MIT CSAIL and EECS, Cambridge, MA, USA

Acknowledgements

I would like to thank Matthias Christandl, Joshua Grochow, Ryan Williams, Virginia Vassilevska Williams, and Jeroen Zuiddam for helpful discussions and suggestions.

Cite As

Josh Alman. Limits on the Universal Method for Matrix Multiplication. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019) https://doi.org/10.4230/LIPIcs.CCC.2019.12

Abstract

In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams [Alman and Williams, 2018] recently defined the Universal Method, which substantially generalizes all the known approaches, including Strassen's Laser Method [V. Strassen, 1987] and Cohn and Umans' Group Theoretic Method [Cohn and Umans, 2003]. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on omega, the exponent of matrix multiplication, better than 2.16805. By comparison, it was previously only known that the weaker "Galactic Method" applied to CW_q could not achieve an exponent of 2.
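As background for the statement above, the following definitions are recalled from the standard literature (e.g. [Coppersmith and Winograd, 1990] and Tao's notes on slice rank), not from this page, so the exact normalization and notation should be read as the usual conventions rather than as the paper's own. The Coppersmith-Winograd tensor is commonly written as
\[
CW_q \;=\; \sum_{i=1}^{q}\bigl(x_0 y_i z_i + x_i y_0 z_i + x_i y_i z_0\bigr) \;+\; x_0 y_0 z_{q+1} \;+\; x_0 y_{q+1} z_0 \;+\; x_{q+1} y_0 z_0,
\]
and a 3-tensor \(T = \sum_{i,j,k} T_{ijk}\, x_i y_j z_k\) has slice rank \(S(T) \le r\) if it can be written as a sum of \(r\) terms, each factoring as \(a(x)\cdot b(y,z)\), \(a(y)\cdot b(x,z)\), or \(a(z)\cdot b(x,y)\). The asymptotic slice rank referred to above is then, under one common convention,
\[
\tilde{S}(T) \;=\; \limsup_{n\to\infty} S\!\left(T^{\otimes n}\right)^{1/n}.
\]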
We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves omega = 2 if and only if it is possible for the Universal method applied to T to achieve omega = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd [Coppersmith and Winograd, 1990] achieved a bound of omega <= 2.376, by applying the Laser Method to CW_q. By our result, the fact that they did not achieve omega=2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve omega=2, then Coppersmith and Winograd’s application of the Laser Method would have achieved omega=2.
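To make the two results above easier to scan, one can introduce the purely notational shorthand \(\omega_u(T)\) for the best bound on omega provable by the Universal Method applied to T, and \(\omega_{\mathrm{laser}}(T)\) for the bound obtained by the Laser Method applied to T (this notation is introduced here for readability and is not fixed by the abstract). The abstract's two statements then read, as a sketch:
\[
\omega_u(CW_q) \;\ge\; 2.16805 \quad\text{for any Coppersmith-Winograd tensor } CW_q,
\]
and, for any tensor T to which the Laser Method applies,
\[
\omega_{\mathrm{laser}}(T) = 2 \;\iff\; \omega_u(T) = 2.
\]
In particular, since Coppersmith and Winograd's application of the Laser Method to CW_q gave only omega <= 2.376, the second statement already implies \(\omega_u(CW_q) > 2\).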

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
  • Theory of computation → Algebraic complexity theory
Keywords
  • Matrix Multiplication
  • Laser Method
  • Slice Rank

References

  1. Josh Alman and Virginia Vassilevska Williams. Further limitations of the known approaches for matrix multiplication. In ITCS, pages 25:1-25:15, 2018.
  2. Josh Alman and Virginia Vassilevska Williams. Limits on all known (and some unknown) approaches to matrix multiplication. In FOCS, pages 580-591, 2018.
  3. Andris Ambainis, Yuval Filmus, and François Le Gall. Fast matrix multiplication: limitations of the Coppersmith-Winograd method. In STOC, pages 585-593, 2015.
  4. Dario Bini. Border rank of a p×q×2 tensor and the optimal approximation of a pair of bilinear forms. In International Colloquium on Automata, Languages, and Programming, pages 98-108. Springer, 1980.
  5. Jonah Blasiak, Thomas Church, Henry Cohn, Joshua A. Grochow, Eric Naslund, William F. Sawin, and Chris Umans. On cap sets and the group-theoretic approach to matrix multiplication. Discrete Analysis, 2017(3):1-27, 2017.
  6. Matthias Christandl, Péter Vrana, and Jeroen Zuiddam. Barriers for fast matrix multiplication from irreversibility. arXiv e-prints, arXiv:1812.06952, December 2018. URL: http://arxiv.org/abs/1812.06952.
  7. Matthias Christandl, Péter Vrana, and Jeroen Zuiddam. Universal points in the asymptotic spectrum of tensors. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 289-296. ACM, 2018.
  8. Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. arXiv preprint arXiv:1810.07896; to appear in STOC 2019, 2018. URL: http://arxiv.org/abs/1810.07896.
  9. Henry Cohn and Christopher Umans. A group-theoretic approach to fast matrix multiplication. In FOCS, pages 438-449, 2003.
  10. Henry Cohn and Christopher Umans. Fast matrix multiplication using coherent configurations. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1074-1086. Society for Industrial and Applied Mathematics, 2013.
  11. Don Coppersmith and Shmuel Winograd. On the asymptotic complexity of matrix multiplication. SIAM J. Comput., 11(3):472-492, 1982.
  12. Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251-280, 1990.
  13. A. M. Davie and A. J. Stothers. Improved bound for complexity of matrix multiplication. Proceedings of the Royal Society of Edinburgh, Section A: Mathematics, 143:351-369, April 2013.
  14. François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In SODA, pages 1029-1046, 2018. URL: https://doi.org/10.1137/1.9781611975031.67.
  15. Matti Karppa and Petteri Kaski. Probabilistic tensors and opportunistic Boolean matrix multiplication. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 496-515. SIAM, 2019.
  16. Robert Kleinberg, Will Sawin, and David Speyer. The growth rate of tri-colored sum-free sets. Discrete Analysis, June 2018. URL: https://doi.org/10.19086/da.3734.
  17. François Le Gall. Faster algorithms for rectangular matrix multiplication. In FOCS, pages 514-523, 2012.
  18. François Le Gall. Powers of tensors and fast matrix multiplication. In ISSAC, pages 296-303, 2014.
  19. A. Schönhage. Partial and total matrix multiplication. SIAM J. Comput., 10(3):434-455, 1981.
  20. V. Strassen. Relative bilinear complexity and matrix multiplication. J. Reine Angew. Math. (Crelles Journal), 375-376:406-443, 1987.
  21. Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354-356, 1969.
  22. Volker Strassen. The asymptotic spectrum of tensors and the exponent of matrix multiplication. In FOCS, pages 49-54, 1986.
  23. Volker Strassen. Degeneration and complexity of bilinear maps: some asymptotic spectra. J. Reine Angew. Math., 413:127-180, 1991.
  24. Terence Tao. A symmetric formulation of the Croot-Lev-Pach-Ellenberg-Gijswijt capset bound. https://terrytao.wordpress.com/2016/05/18/a-symmetric-formulation-of-the-croot-lev-pach-ellenberg-gijswijt-capset-bound/, 2016.
  25. Terence Tao and Will Sawin. Notes on the "slice rank" of tensors. https://terrytao.wordpress.com/2016/08/24/notes-on-the-slice-rank-of-tensors/, 2016.
  26. Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In STOC, pages 887-898, 2012.