The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants

Authors: Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang



File

LIPIcs.ICALP.2024.10.pdf
  • Filesize: 0.8 MB
  • 20 pages

Author Details

Emile Anand
  • Caltech, Pasadena, CA, USA
Jan van den Brand
  • Georgia Tech, Atlanta, GA, USA
Mehrdad Ghadiri
  • MIT, Cambridge, MA, USA
Daniel J. Zhang
  • Georgia Tech, Atlanta, GA, USA

Acknowledgements

The authors would like to thank Richard Peng for advice and comments.

Cite As

Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang. The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.ICALP.2024.10

Abstract

Many iterative algorithms in computer science require repeated computation of some algebraic expression whose input varies slightly from one iteration to the next. Although efficient data structures have been proposed for maintaining the solution of such algebraic expressions under low-rank updates, most of these results are only analyzed under exact arithmetic (the real-RAM model or finite fields), which may not accurately reflect the more limited complexity guarantees of real computers. In this paper, we analyze the stability and bit complexity of such data structures for expressions that involve the inversion, multiplication, addition, and subtraction of matrices under the word-RAM model. We show that the bit complexity only increases linearly in the number of matrix operations in the expression. In addition, we consider the bit complexity of maintaining the determinant of a matrix expression. We show that the required bit complexity depends on the logarithm of the condition number of matrices instead of the logarithm of their determinant. Finally, we discuss rank maintenance and its connections to determinant maintenance. Our results have wide applications ranging from computational geometry (e.g., computing the volume of a polytope) to optimization (e.g., solving linear programs using the simplex algorithm).
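The dynamic data structures discussed in the abstract build on classical low-rank update identities. As a hedged illustration only (this is not the paper's algorithm, and it uses floating point rather than the word-RAM fixed-precision arithmetic the paper analyzes), the sketch below uses the Sherman–Morrison–Woodbury identity to maintain an inverse and the matrix determinant lemma to maintain a determinant under a rank-k update A' = A + U Vᵀ, at cost O(n²k) per update instead of O(n³) recomputation; the function name is illustrative.

```python
import numpy as np

def low_rank_update(A_inv, det_A, U, V):
    """Given A^{-1} and det(A), return ((A')^{-1}, det(A')) for A' = A + U V^T,
    where U, V are n x k with k << n.

    Woodbury identity:
        (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
    Matrix determinant lemma:
        det(A + U V^T) = det(I + V^T A^{-1} U) * det(A)
    """
    AinvU = A_inv @ U                        # n x k
    C = np.eye(U.shape[1]) + V.T @ AinvU     # k x k "capacitance" matrix
    new_inv = A_inv - AinvU @ np.linalg.solve(C, V.T @ A_inv)
    new_det = np.linalg.det(C) * det_A
    return new_inv, new_det

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 6, 2
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned example
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((n, k))
    new_inv, new_det = low_rank_update(np.linalg.inv(A), np.linalg.det(A), U, V)
    # Agrees with recomputing from scratch (up to floating-point error):
    print(np.allclose(new_inv, np.linalg.inv(A + U @ V.T)),
          np.isclose(new_det, np.linalg.det(A + U @ V.T)))
```

The paper's contribution can be read against this sketch: in floating point the repeated application of such updates can accumulate error, and the analysis shows how many bits of precision suffice for the maintained quantities to stay accurate, with the determinant's precision governed by the condition numbers rather than the determinant's magnitude.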

Subject Classification

ACM Subject Classification
  • Theory of computation → Data structures design and analysis
Keywords
  • Data Structures
  • Online Algorithms
  • Bit Complexity

References

  1. Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva. Iterative Refinement for 𝓁_p-norm Regression. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1405-1424. SIAM, 2019. URL: https://doi.org/10.1137/1.9781611975482.86.
  2. Deeksha Adil, Richard Peng, and Sushant Sachdeva. Fast, Provably convergent IRLS algorithm for p-norm Linear Regression. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 14166-14177, 2019. URL: http://papers.nips.cc/paper/9565-fast-provably-convergent-irls-algorithm-for-p-norm-linear-regression.
  3. Deeksha Adil and Sushant Sachdeva. Faster p-norm minimizing flows, via smoothed q-norm problems. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 892-910. SIAM, 2020. URL: https://doi.org/10.1137/1.9781611975994.54.
  4. Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett. The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 587-597, 2018. URL: https://theoryofcomputing.org/articles/v015a021/v015a021.pdf.
  5. Peter A. Beling and Nimrod Megiddo. Using fast matrix multiplication to find basic solutions. Theoretical Computer Science, 205(1):307-316, 1998. URL: https://doi.org/10.1016/S0304-3975(98)00003-6.
  6. Thiago Bergamaschi, Monika Henzinger, Maximilian Probst Gutenberg, Virginia Vassilevska Williams, and Nicole Wein. New techniques and fine-grained hardness for dynamic near-additive spanners. In SODA, pages 1836-1855. SIAM, 2021. URL: https://epubs.siam.org/doi/10.1137/1.9781611976465.110.
  7. Jan van den Brand. Complexity term balancer. URL: https://www.ocf.berkeley.edu/~vdbrand/complexity/. Tool to balance complexity terms depending on fast matrix multiplication.
  8. Jan van den Brand. A deterministic linear program solver in current matrix multiplication time. In SODA, pages 259-278. SIAM, 2020. URL: https://dl.acm.org/doi/abs/10.5555/3381089.3381105.
  9. Jan van den Brand. Unifying matrix data structures: Simplifying and speeding up iterative algorithms. In Symposium on Simplicity in Algorithms (SOSA), pages 1-13. SIAM, 2021. https://arxiv.org/abs/2010.13888.
  10. Jan van den Brand, Yin Tat Lee, Aaron Sidford, and Zhao Song. Solving tall dense linear programs in nearly linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 775-788. ACM, 2020. URL: https://doi.org/10.1145/3357713.3384309.
  11. Jan van den Brand and Danupon Nanongkai. Dynamic approximate shortest paths and beyond: Subquadratic and worst-case update time. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 436-455. IEEE, 2019. URL: https://arxiv.org/abs/1909.10850.
  12. Jan van den Brand, Danupon Nanongkai, and Thatchaphol Saranurak. Dynamic matrix inverse: Improved algorithms and matching conditional lower bounds. In FOCS, pages 456-480. IEEE Computer Society, 2019. https://arxiv.org/abs/1905.05067.
  13. Sébastien Bubeck, Michael B Cohen, Yin Tat Lee, and Yuanzhi Li. An homotopy method for lp regression provably beyond self-concordance and in input-sparsity time. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1130-1137, 2018. https://arxiv.org/abs/1711.01328.
  14. Li Chen, Rasmus Kyng, Yang P Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Maximum flow and minimum-cost flow in almost-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 612-623. IEEE, 2022. URL: https://ieeexplore.ieee.org/document/9996881.
  15. Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. J. ACM, 68(1):3:1-3:39, 2021. URL: https://doi.org/10.1145/3424305.
  16. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, third edition, 2009. URL: https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/.
  17. James Demmel, Ioana Dumitriu, and Olga Holtz. Fast linear algebra is stable. Numerische Mathematik, 108(1):59-91, 2007. URL: https://doi.org/10.1007/s00211-007-0114-x.
  18. Huaian Diao, Rajesh Jayaram, Zhao Song, Wen Sun, and David Woodruff. Optimal sketching for kronecker product regression and low rank approximation. Advances in Neural Information Processing Systems, 32, 2019.
  19. Alan Edelman. Eigenvalues and condition numbers of random matrices. SIAM Journal on Matrix Analysis and Applications, 9(4):543-560, 1988.
  20. Alan Edelman. Eigenvalues and condition numbers of random matrices. PhD thesis, Massachusetts Institute of Technology, 1989. URL: https://math.mit.edu/~edelman/homepage/papers/Eig.pdf.
  21. Matthew Fahrbach, Gang Fu, and Mehrdad Ghadiri. Subquadratic kronecker regression with applications to tensor decomposition. In Advances in Neural Information Processing Systems, 2022.
  22. Vissarion Fisikopoulos and Luis Mariano Peñaranda. Faster geometric algorithms via dynamic determinant computation. Comput. Geom., 54:1-16, 2016. URL: https://www.sciencedirect.com/science/article/pii/S0925772115001261.
  23. Mehrdad Ghadiri, Yin Tat Lee, Swati Padmanabhan, William Swartworth, David Woodruff, and Guanghao Ye. Improving the bit complexity of communication for distributed convex optimization. In 56th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2024.
  24. Mehrdad Ghadiri, Richard Peng, and Santosh Vempala. The bit complexity of efficient continuous optimization. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2023. URL: https://www.computer.org/csdl/proceedings-article/focs/2023/189400c059/1T9796LmQ80.
  25. Christopher Harshaw, Fredrik Sävje, Daniel Spielman, and Peng Zhang. Balancing covariates in randomized experiments with the Gram-Schmidt walk design. arXiv preprint, 2019. URL: https://arxiv.org/abs/1911.03071.
  26. Baihe Huang, Shunhua Jiang, Zhao Song, Runzhou Tao, and Ruizhe Zhang. Solving SDP faster: A robust ipm framework and efficient implementation. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 233-244. IEEE, 2022. URL: https://www.computer.org/csdl/proceedings-article/focs/2022/551900a233/1JtvWgBUn8A.
  27. Haotian Jiang, Tarun Kathuria, Yin Tat Lee, Swati Padmanabhan, and Zhao Song. A faster interior point method for semidefinite programming. In 2020 IEEE 61st annual symposium on foundations of computer science (FOCS), pages 910-918. IEEE, 2020. URL: https://ieeexplore.ieee.org/document/9317892.
  28. Shunhua Jiang, Binghui Peng, and Omri Weinstein. The complexity of dynamic least-squares regression. In FOCS, 2023. URL: https://www.computer.org/csdl/proceedings-article/focs/2023/189400b605/1T972gjnp4I.
  29. Shunhua Jiang, Zhao Song, Omri Weinstein, and Hengjie Zhang. A faster algorithm for solving general LPs. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 823-832, 2021.
  30. Yin Tat Lee and Aaron Sidford. Path finding I: Solving linear programs with Õ(√rank) linear system solves. arXiv preprint, 2013. URL: https://ieeexplore.ieee.org/document/6979027, URL: https://arxiv.org/abs/1312.6677.
  31. Yiheng Lin, James A Preiss, Emile Timothy Anand, Yingying Li, Yisong Yue, and Adam Wierman. Learning-augmented control via online adaptive policy selection: No regret via contractive perturbations. In ACM SIGMETRICS, Workshop on Learning-augmented Algorithms: Theory and Applications 2023, 2023. URL: https://learning-augmented-algorithms.github.io/papers/sigmetrics23-lata-posters-paper5.pdf.
  32. Yiheng Lin, James A Preiss, Emile Timothy Anand, Yingying Li, Yisong Yue, and Adam Wierman. Online adaptive policy selection in time-varying systems: No-regret via contractive perturbations. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL: https://openreview.net/forum?id=hDajsofjRM.
  33. Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse (extended abstract). In FOCS, pages 509-517. IEEE Computer Society, 2004. URL: https://ieeexplore.ieee.org/document/1366271.
  34. Piotr Sankowski. Faster dynamic matchings and vertex connectivity. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pages 118-126, USA, 2007. Society for Industrial and Applied Mathematics. URL: https://dl.acm.org/doi/10.5555/1283383.1283397.
  35. Terence Tao and Van Vu. On random ± 1 matrices: singularity and determinant. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 431-440, 2005. URL: https://dl.acm.org/doi/10.1145/1060590.1060655.
  36. Santosh S. Vempala, Ruosong Wang, and David P. Woodruff. The communication complexity of optimization. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1733-1752. SIAM, 2020.
  37. Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. New bounds for matrix multiplication: from alpha to omega. In SODA, pages 3792-3835. SIAM, 2024.
  38. Max A Woodbury. Inverting modified matrices. Statistical Research Group, 1950. URL: https://books.google.com/books/about/Inverting_Modified_Matrices.html?id=_zAnzgEACAAJ.