Further Limitations of the Known Approaches for Matrix Multiplication

Authors: Josh Alman, Virginia Vassilevska Williams



Cite As

Josh Alman and Virginia Vassilevska Williams. Further Limitations of the Known Approaches for Matrix Multiplication. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Abstract

We consider the techniques behind the current best algorithms for matrix multiplication. Our results are threefold. (1) We provide a unifying framework, showing that all known matrix multiplication running times since 1986 can be achieved from a single very natural tensor: the structural tensor T_q of addition modulo an integer q. (2) We show that if one applies a generalization of the known techniques (arbitrary zeroing out of tensor powers to obtain independent matrix products, in order to use the asymptotic sum inequality of Schönhage) to an arbitrary monomial degeneration of T_q, then there is an explicit lower bound, depending on q, on the best upper bound on the matrix multiplication exponent omega that one can achieve. We also show upper bounds on the value alpha that one can achieve, where alpha is such that n * n^alpha * n matrix multiplication can be computed in n^{2+o(1)} time. (3) We show that our lower bound on omega approaches 2 as q goes to infinity. This suggests a promising approach to improving the bound on omega: for variable q, find a monomial degeneration of T_q which, using the known techniques, produces an upper bound on omega as a function of q. Then, take q to infinity. It is not ruled out, and hence possible, that one can obtain omega = 2 in this way.
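The structural tensor of addition modulo q is easy to write down explicitly: T_q = sum over i, j in Z_q of x_i y_j z_{(i+j) mod q}. As a minimal illustration (the function name and the dense 0/1-array representation are ours, not the paper's), in Python:

```python
def structural_tensor(q):
    """Structural tensor T_q of addition modulo q:
    T_q = sum_{i, j in Z_q} x_i * y_j * z_{(i+j) mod q},
    stored as a q x q x q 0/1 array T[i][j][k] giving the
    coefficient of x_i * y_j * z_k."""
    return [[[1 if (i + j) % q == k else 0 for k in range(q)]
             for j in range(q)]
            for i in range(q)]

# Each pair (i, j) contributes exactly one monomial, so T_q has q^2 ones.
T5 = structural_tensor(5)
assert sum(T5[i][j][k] for i in range(5) for j in range(5) for k in range(5)) == 25
```

A "monomial degeneration" in the sense of the abstract keeps a subset of these q^2 monomials (possibly after scaling by powers of an indeterminate); zeroing out variables in a tensor power of T_q then isolates independent matrix products, to which Schönhage's asymptotic sum inequality can be applied.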
Keywords

  • matrix multiplication
  • lower bound
  • monomial degeneration
  • structural tensor of addition mod p




References

  1. Andris Ambainis, Yuval Filmus, and François Le Gall. Fast matrix multiplication: limitations of the Coppersmith-Winograd method. In STOC, pages 585-593, 2015.
  2. D. Bini, M. Capovani, F. Romani, and G. Lotti. O(n^2.7799) complexity for n × n approximate matrix multiplication. Inf. Process. Lett., 8(5):234-235, 1979.
  3. Markus Bläser. Fast matrix multiplication. Theory of Computing, Graduate Surveys, 5:1-60, 2013.
  4. Jonah Blasiak, Thomas Church, Henry Cohn, Joshua A. Grochow, Eric Naslund, William F. Sawin, and Chris Umans. On cap sets and the group-theoretic approach to matrix multiplication. Discrete Analysis, 2017(3):1-27, 2017.
  5. Henry Cohn. Personal communication, 2017.
  6. Henry Cohn, Robert Kleinberg, Balazs Szegedy, and Christopher Umans. Group-theoretic algorithms for matrix multiplication. In FOCS, pages 379-388, 2005. Google Scholar
  7. Henry Cohn and Christopher Umans. A group-theoretic approach to fast matrix multiplication. In FOCS, pages 438-449, 2003. Google Scholar
  8. D. Coppersmith and S. Winograd. On the asymptotic complexity of matrix multiplication. In SFCS, pages 82-90, 1981. Google Scholar
  9. Don Coppersmith. Rectangular matrix multiplication revisited. Journal of Complexity, 13(1):42-49, 1997. Google Scholar
  10. Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of symbolic computation, 9(3):251-280, 1990. Google Scholar
  11. A.M. Davie and A. J. Stothers. Improved bound for complexity of matrix multiplication. Proceedings of the Royal Society of Edinburgh, Section: A Mathematics, 143:351-369, 4 2013. Google Scholar
  12. Jordan S Ellenberg and Dion Gijswijt. On large subsets of 𝔽_qⁿ with no three-term arithmetic progression. Annals of Mathematics, 185(1):339-343, 2017. Google Scholar
  13. François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. CoRR, abs/1708.05622, 2017. URL: http://arxiv.org/abs/1708.05622.
  14. Robert Kleinberg, William F. Sawin, and David E. Speyer. The growth rate of tri-colored sum-free sets. arXiv, abs/1607.00047, 2016. URL: http://arxiv.org/abs/1607.00047.
  15. François Le Gall. Faster algorithms for rectangular matrix multiplication. In FOCS, pages 514-523, 2012. Google Scholar
  16. François Le Gall. Powers of tensors and fast matrix multiplication. In ISSAC, pages 296-303, 2014. Google Scholar
  17. Mateusz Michalek. Personal communication, 2014.
  18. Sergey Norin. A distribution on triples with maximum entropy marginal. arXiv, abs/1608.00243, 2016. URL: http://arxiv.org/abs/1608.00243.
  19. V. Y. Pan. Strassen’s algorithm is not optimal. In FOCS, volume 19, pages 166-176, 1978. Google Scholar
  20. V. Y. Pan. New fast algorithms for matrix operations. SIAM J. Comput., 9(2):321-342, 1980. Google Scholar
  21. Luke Pebody. Proof of a conjecture of Kleinberg-Sawin-Speyer. arXiv, abs/1608.05740, 2016. URL: http://arxiv.org/abs/1608.05740.
  22. Raphaël Salem and Donald C Spencer. On sets of integers which contain no three terms in arithmetical progression. Proceedings of the National Academy of Sciences, 28(12):561-563, 1942. Google Scholar
  23. A. Schönhage. Partial and total matrix multiplication. SIAM J. Comput., 10(3):434-455, 1981. Google Scholar
  24. V. Strassen. The asymptotic spectrum of tensors and the exponent of matrix multiplication. In FOCS, pages 49-54, 1986. Google Scholar
  25. V. Strassen. Relative bilinear complexity and matrix multiplication. J. reine angew. Math. (Crelles Journal), 375-376:406-443, 1987. Google Scholar
  26. Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354-356, 1969. Google Scholar
  27. Virginia Vassilevska Williams. Multiplying matrices faster than coppersmith-winograd. In STOC, pages 887-898, 2012. Google Scholar