# Optimal Fine-Grained Hardness of Approximation of Linear Equations

## File

LIPIcs.ICALP.2021.20.pdf
• Filesize: 0.77 MB
• 19 pages

## Cite As

Mitali Bafna and Nikhil Vyas. Optimal Fine-Grained Hardness of Approximation of Linear Equations. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 20:1-20:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.ICALP.2021.20

## Abstract

The problem of solving linear systems is one of the most fundamental problems in computer science, where given a satisfiable linear system (A,b), for A ∈ ℝ^{n×n} and b ∈ ℝⁿ, we wish to find a vector x ∈ ℝⁿ such that Ax = b. The current best algorithms for solving dense linear systems reduce the problem to matrix multiplication, and run in time O(n^ω). We consider the problem of finding ε-approximate solutions to linear systems with respect to the L₂-norm, that is, given a satisfiable linear system (A ∈ ℝ^{n×n}, b ∈ ℝⁿ), find an x ∈ ℝⁿ such that ||Ax - b||₂ ≤ ε||b||₂. Our main result is a fine-grained reduction from computing the rank of a matrix to finding ε-approximate solutions to linear systems. In particular, if the best known Õ(n^ω) time algorithm for computing the rank of n × O(n) matrices is optimal (which we conjecture is true), then finding an ε-approximate solution to a dense linear system also requires Ω̃(n^ω) time, even for ε as large as (1 - 1/poly(n)). We also prove (under some modified conjectures for the rank-finding problem) optimal hardness of approximation for sparse linear systems, linear systems over positive semidefinite matrices and well-conditioned linear systems. At the heart of our results is a novel reduction from the rank problem to a decision version of the approximate linear systems problem. This reduction preserves properties such as matrix sparsity and bit complexity.
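To make the problem definition concrete, here is a minimal sketch (not from the paper, and not its reduction) of the ε-approximation criterion from the abstract: given a satisfiable system (A, b), a vector x is an ε-approximate solution when ||Ax − b||₂ ≤ ε||b||₂. The helper name `is_eps_approximate` is illustrative only.

```python
import numpy as np

def is_eps_approximate(A, x, b, eps):
    """Check the abstract's criterion: ||Ax - b||_2 <= eps * ||b||_2."""
    return np.linalg.norm(A @ x - b) <= eps * np.linalg.norm(b)

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))       # random dense matrix, invertible w.h.p.
x_true = rng.standard_normal(n)
b = A @ x_true                        # construct b so the system is satisfiable

x = np.linalg.solve(A, b)             # an exact dense solver (LU-based)
assert is_eps_approximate(A, x, b, eps=1e-6)
```

Note that the exact solver easily meets even tiny ε here; the paper's point is a lower bound: under the stated rank conjecture, no algorithm can certify such an x in time substantially below n^ω, even for ε as large as 1 − 1/poly(n).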

## Subject Classification

### ACM Subject Classification
• Theory of computation → Problems, reductions and completeness
### Keywords
• Linear Equations
• Fine-Grained Complexity
• Hardness of Approximation

