Faster Sparse Matrix Inversion and Rank Computation in Finite Fields

Authors: Sílvia Casacuberta, Rasmus Kyng




File

LIPIcs.ITCS.2022.33.pdf
  • Filesize: 0.78 MB
  • 24 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2022.33

Author Details

Sílvia Casacuberta
  • Harvard University, Cambridge, MA, USA
Rasmus Kyng
  • ETH Zürich, Switzerland

Acknowledgements

We are thankful to Richard Peng and Markus Püschel for helpful suggestions and comments.

Cite As

Sílvia Casacuberta and Rasmus Kyng. Faster Sparse Matrix Inversion and Rank Computation in Finite Fields. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022) https://doi.org/10.4230/LIPIcs.ITCS.2022.33

Abstract

We improve the best known running time for inverting sparse matrices over finite fields, lowering it to expected O(n^{2.2131}) time under the current bounds for fast rectangular matrix multiplication. We achieve the same running time for computing the rank and nullspace of a sparse matrix over a finite field. The improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and block Hankel matrices due to Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices together with fast rectangular matrix multiplication algorithms. We generalize our inversion method to block structured matrices with other displacement operators, strengthening the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices, as well as of block Vandermonde-like matrices with structured blocks. As further applications, we improve the complexity of several algorithms in topological data analysis and in finite group theory.
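The low displacement rank property exploited by the second technique can be checked directly on a small example. The sketch below (an illustration of the classical structured-matrix fact, not of the paper's algorithm) builds a Hankel matrix H with H[i][j] = h[i+j] and verifies that its Sylvester-type displacement Zᵀ·H − H·Z, where Z is the down-shift matrix, has rank at most 2. For simplicity it works over the rationals rather than a finite field; the rank bound itself is field-independent.

```python
from fractions import Fraction
import random

def matmul(A, B):
    """Naive matrix product (sufficient for a small demo)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

random.seed(0)
n = 6
h = [random.randint(1, 9) for _ in range(2 * n - 1)]  # h_0, ..., h_{2n-2}
H = [[h[i + j] for j in range(n)] for i in range(n)]  # Hankel: H[i][j] = h[i+j]
Z = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]  # down-shift
ZT = [[Z[j][i] for j in range(n)] for i in range(n)]

# Displacement D = Z^T H - H Z is supported only on the last row and
# last column, so its rank is at most 2 (here exactly 2, since h > 0).
D = [[a - b for a, b in zip(r1, r2)]
     for r1, r2 in zip(matmul(ZT, H), matmul(H, Z))]
print(rank(D))  # -> 2
```

A rank-r displacement means the n×n matrix is described by O(rn) parameters, which is what lets the structured inversion routines in the paper run in subquadratic time per block operation.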

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
Keywords
  • Matrix inversion
  • rank computation
  • displacement operators
  • numerical linear algebra

References

  1. Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 522-539. SIAM, 2021.
  2. Bernhard Beckermann and George Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM Journal on Matrix Analysis and Applications, 15(3):804-823, 1994.
  3. Robert R. Bitmead and Brian D. O. Anderson. Asymptotically fast solution of Toeplitz and related systems of linear equations. Linear Algebra and its Applications, 34:103-116, 1980.
  4. Nicholas Bonello, Sheng Chen, and Lajos Hanzo. Low-density parity-check codes and their rateless relatives. IEEE Communications Surveys & Tutorials, 13(1):3-26, 2010.
  5. Alin Bostan, Claude-Pierre Jeannerod, and Éric Schost. Solving Toeplitz- and Vandermonde-like linear systems with large displacement rank. In Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 33-40, 2007.
  6. David G. Cantor and Erich Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Informatica, 28:693-701, 1991.
  7. Chao Chen and Michael Kerber. An output-sensitive algorithm for persistent homology. Computational Geometry, 46(4):435-447, 2013.
  8. Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. Journal of the ACM (JACM), 68(1):1-39, 2021.
  9. Henry Cohn and Christopher Umans. Fast matrix multiplication using coherent configurations. In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1074-1087. SIAM, 2013.
  10. Don Coppersmith. Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm. Mathematics of Computation, 62(205):333-350, 1994.
  11. Don Coppersmith and Shmuel Winograd. On the asymptotic complexity of matrix multiplication. SIAM Journal on Computing, 11(3):472-492, 1982.
  12. James Demmel, Ioana Dumitriu, Olga Holtz, and Robert Kleinberg. Fast matrix multiplication is stable. Numerische Mathematik, 106(2):199-224, 2007.
  13. John D. Dixon. Exact solution of linear equations using p-adic expansions. Numerische Mathematik, 40(1):137-141, 1982.
  14. Wayne Eberly, Mark Giesbrecht, Pascal Giorgi, Arne Storjohann, and Gilles Villard. Solving sparse rational linear systems. In Proceedings of the 2006 International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 63-70, 2006.
  15. Wayne Eberly, Mark Giesbrecht, Pascal Giorgi, Arne Storjohann, and Gilles Villard. Faster inversion and other black box matrix computations using efficient block projections. In Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 143-150, 2007.
  16. Herbert Edelsbrunner and John Harer. Persistent homology - a survey. Contemporary Mathematics, 453:257-282, 2008.
  17. Herbert Edelsbrunner, David Letscher, and Afra Zomorodian. Topological persistence and simplification. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS), pages 454-463. IEEE, 2000.
  18. François Le Gall. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 296-303, 2014.
  19. François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1029-1046. SIAM, 2018.
  20. Mark Giesbrecht, Armin Jamshidpey, and Éric Schost. Subquadratic-time algorithms for normal bases. arXiv preprint arXiv:2005.03497, 2020.
  21. Pascal Giorgi, Claude-Pierre Jeannerod, and Gilles Villard. On the complexity of polynomial matrix computations. In Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 135-142, 2003.
  22. Israel Gohberg and Georg Heinig. Inversion of finite Toeplitz matrices made of elements of a non-commutative algebra. Rev. Roumaine Math. Pures Appl., XIX(5):623-663, 1974.
  23. Israel Gohberg and Naum Ya. Krupnik. A formula for the inversion of finite Toeplitz matrices. Mat. Issled., 7(2):272-283, 1972.
  24. Israel Gohberg and Arkadii Semencul. On the inversion of finite Toeplitz matrices and their continuous analogs. Mat. Issled., 7(12):201-233, 1972.
  25. Magnus Rudolph Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems, volume 49. NBS Washington, DC, 1952.
  26. Xiaohan Huang and Victor Y. Pan. Fast rectangular matrix multiplication and applications. Journal of Complexity, 14(2):257-299, 1998.
  27. Yufan Huang and Richard Peng. Laplacians are complete for linear systems over Z_p, 2020.
  28. Ted Hurley. Group rings and rings of matrices. Int. J. Pure Appl. Math, 31(3):319-335, 2006.
  29. Edmund Jonckheere and Chingwo Ma. A simple Hankel interpretation of the Berlekamp-Massey algorithm. Linear Algebra and its Applications, 125:65-76, 1989.
  30. Antoine Joux and Cécile Pierrot. Nearly sparse linear algebra and application to discrete logarithms computations. In Contemporary Developments in Finite Fields and Applications, pages 119-144. World Scientific, 2016.
  31. Thomas Kailath, Sun-Yuan Kung, and Martin Morf. Displacement ranks of matrices and linear equations. Journal of Mathematical Analysis and Applications, 68(2):395-407, 1979.
  32. Erich Kaltofen. Analysis of Coppersmith's block Wiedemann algorithm for the parallel solution of sparse linear systems. Mathematics of Computation, 64(210):777-806, 1995.
  33. Erich Kaltofen and B. David Saunders. On Wiedemann's method of solving sparse linear systems. In International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, pages 29-38. Springer, 1991.
  34. Rasmus Kyng, Di Wang, and Peng Zhang. Packing LPs are hard to solve accurately, assuming linear equations are hard. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 279-296. SIAM, 2020.
  35. Rasmus Kyng and Peng Zhang. Hardness results for structured linear systems. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 684-695. IEEE, 2017.
  36. George Labahn and Stan Cabay. Matrix Padé fractions and their computation. SIAM Journal on Computing, 18(4):639-657, 1989.
  37. George Labahn, Dong Koo Choi, and Stan Cabay. The inverses of block Hankel and block Toeplitz matrices. SIAM Journal on Computing, 19(1):98-123, 1990.
  38. Brian A. LaMacchia and Andrew M. Odlyzko. Solving large sparse linear systems over finite fields. In Conference on the Theory and Application of Cryptography, pages 109-133. Springer, 1990.
  39. Cornelius Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. United States Government Press Office, Los Angeles, CA, 1950.
  40. Nikola Milosavljević, Dmitriy Morozov, and Primoz Skraba. Zigzag persistent homology in matrix multiplication time. In Proceedings of the 27th Annual Symposium on Computational Geometry, pages 216-225, 2011.
  41. Cameron Musco, Christopher Musco, and Aaron Sidford. Stability of the Lanczos method for matrix function approximation. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1605-1624. SIAM, 2018.
  42. Zipei Nie. Matrix anti-concentration inequalities with applications. arXiv preprint arXiv:2111.05553, 2021.
  43. Vadim Olshevsky and Amin Shokrollahi. A displacement approach to efficient decoding of algebraic-geometric codes. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing (STOC), pages 235-244, 1999.
  44. Peter J. Olver. On multivariate interpolation. Studies in Applied Mathematics, 116(2):201-240, 2006.
  45. Nina Otter, Mason A. Porter, Ulrike Tillmann, Peter Grindrod, and Heather A. Harrington. A roadmap for the computation of persistent homology. EPJ Data Science, 6:1-38, 2017.
  46. Victor Pan. New fast algorithms for matrix operations. SIAM Journal on Computing, 9(2):321-342, 1980.
  47. Victor Pan. On computations with dense structured matrices. Mathematics of Computation, 55(191):179-190, 1990.
  48. Victor Pan. Structured matrices and polynomials: unified superfast algorithms. Springer Science & Business Media, 2001.
  49. Richard Peng and Santosh Vempala. Solving sparse linear systems faster than matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 504-521. SIAM, 2021.
  50. Yousef Saad. Iterative methods for sparse linear systems. SIAM, 2003.
  51. Daniel A. Spielman and Shang-Hua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3):835-885, 2014.
  52. Arne Storjohann. The shifted number system for fast linear algebra on integer matrices. Journal of Complexity, 21(4):609-650, 2005.
  53. Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354-356, 1969.
  54. Volker Strassen. The asymptotic spectrum of tensors and the exponent of matrix multiplication. In 27th Annual Symposium on Foundations of Computer Science (FOCS 1986), pages 49-54. IEEE, 1986.
  55. William F. Trench. An algorithm for the inversion of finite Toeplitz matrices. Journal of the Society for Industrial and Applied Mathematics, 12(3):515-522, 1964.
  56. Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC), pages 887-898, 2012.
  57. Douglas Wiedemann. Solving sparse linear equations over finite fields. IEEE Transactions on Information Theory, 32(1):54-62, 1986.
  58. James Hardy Wilkinson. Error analysis of direct methods of matrix inversion. Journal of the ACM (JACM), 8(3):281-330, 1961.
  59. Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. Discrete & Computational Geometry, 33(2):249-274, 2005.
  60. Uri Zwick. All pairs shortest paths using bridging sets and rectangular matrix multiplication. Journal of the ACM (JACM), 49(3):289-317, 2002.