Sparse Recovery for Orthogonal Polynomial Transforms

Authors: Anna Gilbert, Albert Gu, Christopher Ré, Atri Rudra, Mary Wootters




File

LIPIcs.ICALP.2020.58.pdf
  • Filesize: 0.57 MB
  • 16 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ICALP.2020.58

Author Details

Anna Gilbert
  • Department of Mathematics, Yale University, New Haven, CT, USA
Albert Gu
  • Department of Computer Science, Stanford University, CA, USA
Christopher Ré
  • Department of Computer Science, Stanford University, CA, USA
Atri Rudra
  • Department of Computer Science and Engineering, University at Buffalo, NY, USA
Mary Wootters
  • Departments of Computer Science and Electrical Engineering, Stanford University, CA, USA

Acknowledgements

We would like to thank Mark Iwen for useful conversations. Thanks also to Stefan Steinerberger for showing us the proof of a number theory lemma we used in the 1-sparse Jacobi solver and graciously allowing us to use it, and to Clément Canonne for helpful comments on our manuscript. We also thank Eric Winsor for pointing out an error in a lemma in an earlier version of this paper. Thanks also to anonymous reviewers who pointed out some missing references and whose comments improved the presentation of the paper.

Cite As

Anna Gilbert, Albert Gu, Christopher Ré, Atri Rudra, and Mary Wootters. Sparse Recovery for Orthogonal Polynomial Transforms. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 58:1-58:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020) https://doi.org/10.4230/LIPIcs.ICALP.2020.58

Abstract

In this paper we consider the following sparse recovery problem. We have query access to a vector 𝐱 ∈ ℝ^N such that x̂ = 𝐅 𝐱 is k-sparse (or nearly k-sparse) for some orthogonal transform 𝐅. The goal is to output an approximation (in an 𝓁₂ sense) to x̂ in sublinear time. This problem has been well-studied in the special case that 𝐅 is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k ⋅ polylog N). However, for transforms 𝐅 other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled.
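To make the setup concrete, here is a minimal sketch (our illustration, not the paper's algorithm), taking 𝐅 to be the orthogonal DCT-II, a Chebyshev-type polynomial transform: we build 𝐱 so that x̂ = 𝐅𝐱 is k-sparse, and the recovery problem is to approximate x̂ from a small number of queries to entries of 𝐱.

```python
# Minimal illustration of the problem setup (not the paper's algorithm).
# F is the orthogonal DCT-II, a Chebyshev-type polynomial transform.
import numpy as np
from scipy.fft import dct, idct

N, k = 1024, 4
rng = np.random.default_rng(0)

# Pick a k-sparse transform-side vector x_hat with random support.
x_hat = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x_hat[support] = rng.standard_normal(k)

# Since F is orthogonal, x = F^T x_hat; the algorithm may query entries of x.
x = idct(x_hat, norm="ortho")

# The naive solution reads all N entries of x and applies F, which is not
# sublinear; a sparse recovery algorithm must get by with far fewer queries.
assert np.allclose(dct(x, norm="ortho"), x_hat)
```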
In this paper we give sublinear-time algorithms, running in time poly(k log(N)), for solving the sparse recovery problem for orthogonal transforms 𝐅 that arise from orthogonal polynomials. More precisely, our algorithm works for any 𝐅 that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (including the Chebyshev and Legendre polynomials as special cases) and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector; we note, however, that vectors with random support have this property with high probability.
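As a quick sanity check on the special cases mentioned above (our illustration; the normalization conventions are those of scipy), the Jacobi polynomials P_n^{(α,β)} reduce to the Legendre polynomials at (α,β) = (0,0) exactly, and to the Chebyshev polynomials T_n at (α,β) = (-1/2,-1/2) after normalizing at x = 1:

```python
# Chebyshev and Legendre polynomials as special cases of the Jacobi family.
import numpy as np
from scipy.special import eval_jacobi, eval_chebyt, eval_legendre

x = np.linspace(-1, 1, 7)
for n in range(5):
    # Legendre: P_n^{(0,0)}(x) equals the Legendre polynomial exactly.
    assert np.allclose(eval_jacobi(n, 0, 0, x), eval_legendre(n, x))
    # Chebyshev: T_n(x) = P_n^{(-1/2,-1/2)}(x) / P_n^{(-1/2,-1/2)}(1),
    # i.e. equality holds up to normalization at the endpoint x = 1.
    scale = eval_jacobi(n, -0.5, -0.5, 1.0)
    assert np.allclose(eval_jacobi(n, -0.5, -0.5, x) / scale,
                       eval_chebyt(n, x))
```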
Our approach is to give a very general reduction from the k-sparse recovery problem to the 1-sparse recovery problem that holds for any flat orthogonal polynomial transform; we then solve this 1-sparse recovery problem for transforms derived from Jacobi polynomials. Sparse FFT algorithms are frequently described as implementing such a reduction; however, the technical details of those works are quite specific to the Fourier transform, and the actual implementations of those algorithms do not use the 1-sparse algorithm as a black box. In this work we give a reduction that works for a broad class of orthogonal polynomial families and that uses any 1-sparse recovery algorithm as a black box.
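For intuition, here is a schematic sketch of how a k-sparse recovery routine can treat a 1-sparse solver as a black box, in the style of peeling-based sparse transforms. The `isolate` oracle and the solver interface are hypothetical names invented for this illustration; the paper's actual reduction is more involved.

```python
# Schematic sketch only: a peeling-style reduction that uses a 1-sparse
# recovery algorithm as a black box. All interfaces here are hypothetical.
def k_sparse_recover(query_x, one_sparse_solver, isolate, k, rounds=3):
    """query_x: oracle returning entries of x.
    one_sparse_solver: black box that, given query access to a vector whose
        transform is approximately 1-sparse, returns (position, value) or None.
    isolate: yields query oracles for ~k "buckets" of x, filtered so that,
        with good probability, at most one unrecovered coefficient survives
        in each bucket (already-recovered mass is subtracted off)."""
    recovered = {}
    for _ in range(rounds):
        for bucket_oracle in isolate(query_x, recovered, num_buckets=2 * k):
            result = one_sparse_solver(bucket_oracle)
            if result is not None:
                pos, val = result
                # Accumulate: later rounds correct residual estimation error.
                recovered[pos] = recovered.get(pos, 0.0) + val
    return recovered
```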

Subject Classification

ACM Subject Classification
  • Theory of computation → Streaming, sublinear and near linear time algorithms
  • Theory of computation → Algorithm design techniques
Keywords
  • Orthogonal polynomials
  • Jacobi polynomials
  • sublinear algorithms
  • sparse recovery

