Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation

Authors: Amol Aggarwal, Josh Alman




File

LIPIcs.CCC.2022.22.pdf
  • Filesize: 0.77 MB
  • 23 pages

Document Identifiers

  • DOI: 10.4230/LIPIcs.CCC.2022.22

Author Details

Amol Aggarwal
  • Department of Mathematics, Columbia University, New York, NY, USA
  • Institute for Advanced Study, Princeton, NJ, USA
Josh Alman
  • Department of Computer Science, Columbia University, New York, NY, USA

Acknowledgements

We would like to thank Alexei Borodin, Lijie Chen, Andrei Martinez-Finkelshtein, Sushant Sachdeva, Paris Siminelakis, Roald Trigub, Ryan Williams, and anonymous reviewers for helpful discussions and advice throughout this project.

Cite As

Amol Aggarwal and Josh Alman. Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022). https://doi.org/10.4230/LIPIcs.CCC.2022.22

Abstract

For any real numbers B ≥ 1 and δ ∈ (0,1) and function f: [0,B] → ℝ, let d_{B; δ}(f) ∈ ℤ_{> 0} denote the minimum degree of a polynomial p(x) satisfying sup_{x ∈ [0,B]} |p(x) - f(x)| < δ. In this paper, we provide precise asymptotics for d_{B; δ}(e^{-x}) and d_{B; δ}(e^x) in terms of both B and δ, improving both the previously known upper and lower bounds. In particular, we show d_{B; δ}(e^{-x}) = Θ(max{√(B log(δ^{-1})), log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}) and d_{B; δ}(e^{x}) = Θ(max{B, log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}), and we explicitly determine the leading coefficients in most parameter regimes.
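As a quick sanity check on these asymptotics (our own illustration, not code from the paper): Chebyshev interpolation on [0, B] is within a small factor of the best polynomial approximation, so the least interpolation degree achieving error below δ tracks d_{B; δ}(e^{-x}) up to constants. The sketch below assumes NumPy; the names cheb_min_degree and theta_expression are ours, and a dense grid stands in for an exact sup-norm computation.

```python
# Sketch: numerically estimate d_{B; delta}(e^{-x}) via Chebyshev interpolation.
# Illustration only, not from the paper. Chebyshev interpolants are near-best
# (within an O(log d) factor of the optimal sup-norm error), so the minimal
# interpolating degree tracks the true minimal degree up to small slack.
import numpy as np

def cheb_min_degree(B, delta, max_deg=500):
    """Smallest d whose degree-d Chebyshev interpolant of e^{-x} on [0, B]
    has error < delta on a dense test grid (a proxy for the sup norm)."""
    xs = np.linspace(0.0, B, 20001)
    ys = np.exp(-xs)
    for d in range(1, max_deg + 1):
        p = np.polynomial.Chebyshev.interpolate(lambda x: np.exp(-x), d,
                                                domain=[0.0, B])
        if np.max(np.abs(p(xs) - ys)) < delta:
            return d
    return None

def theta_expression(B, delta):
    """The Theta(.) bound from the abstract with leading constants dropped.
    Its second branch is only meaningful when log(1/delta) well exceeds B."""
    L = np.log(1.0 / delta)
    t1 = np.sqrt(B * L)
    t2 = L / np.log(L / B) if L > np.e * B else 0.0
    return max(t1, t2)

if __name__ == "__main__":
    delta = 1e-6
    for B in (1.0, 4.0, 16.0, 64.0):
        print(f"B={B:5.1f}  empirical degree={cheb_min_degree(B, delta)}  "
              f"Theta expression ~ {theta_expression(B, delta):.1f}")
```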
Polynomial approximations for e^{-x} and e^x have applications to the design of algorithms for many problems, including in scientific computing, graph algorithms, machine learning, and statistics. Our degree bounds show both the power and limitations of these algorithms.
We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^{-Θ(1)}. We show that the achievable running time depends on B, the square of the diameter of the point set, with a transition at B = Θ(log n) mirroring the corresponding transition in d_{B; δ}(e^{-x}) (a toy sketch of the underlying polynomial method follows the list below):
- When B = o(log n), we give the first algorithm running in time n^{1 + o(1)}. 
- When B = κ log n for a small constant κ > 0, we give an algorithm running in time n^{1 + O(log log κ^{-1} / log κ^{-1})}. The log log κ^{-1} / log κ^{-1} term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_{B; δ}(e^{-x}).
- When B = ω(log n), we show that time n^{2 - o(1)} is necessary assuming SETH.
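
To see where the transition comes from: with δ = n^{-Θ(1)} we have log(δ^{-1}) = Θ(log n), so for B = κ log n the degree bound above reads d_{B; δ}(e^{-x}) = Θ(log n / log κ^{-1}) for small κ. The following toy sketch (our own illustration in small constant dimension, assuming NumPy; batch_gaussian_kde_poly and multi_indices are hypothetical names, not the paper's algorithm) shows the mechanism: replace e^{-t} by a degree-D polynomial p on [0, B] and expand p(‖q - x‖²) via the multinomial theorem, so the batch sums factor through point-side sums of monomials rather than all point-query pairs.

```python
# Toy sketch of the polynomial method behind these KDE algorithms
# (illustration only; the function names and structure are ours, and the
# paper's actual algorithm works in Theta(log n) dimensions with a carefully
# optimized polynomial). We approximate
#     K(q) = sum_i exp(-||q - x_i||^2)
# by sum_i p(||q - x_i||^2) for a degree-D polynomial p accurate on [0, B],
# then expand the powers of ||q - x||^2 = ||q||^2 + ||x||^2 - 2<q, x> so that
# all sums over the points are precomputed once, independently of the query.
import math
import numpy as np

def multi_indices(dim, total):
    """All alpha in Z_{>=0}^dim with |alpha| = total."""
    if dim == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in multi_indices(dim - 1, total - first):
            yield (first,) + rest

def batch_gaussian_kde_poly(queries, points, D):
    m, dim = queries.shape
    # Upper bound on the squared distances the polynomial must cover.
    B = (np.max(np.linalg.norm(queries, axis=1))
         + np.max(np.linalg.norm(points, axis=1))) ** 2
    # Monomial coefficients c with p(t) = sum_k c[k] t^k ~ e^{-t} on [0, B].
    ts = np.linspace(0.0, B, 50 * (D + 1))
    c = np.polynomial.polynomial.polyfit(ts, np.exp(-ts), D)
    # Point-side sums S[b, alpha] = sum_i ||x_i||^{2b} x_i^alpha: one pass
    # over the points per monomial, so O(n * #monomials) total.
    xn2 = np.sum(points ** 2, axis=1)
    S = {}
    for b in range(D + 1):
        for e in range(D - b + 1):
            for alpha in multi_indices(dim, e):
                S[(b, alpha)] = np.sum(
                    xn2 ** b * np.prod(points ** np.array(alpha), axis=1))
    # Query side: t^k = sum_{a+b+e=k} k!/(a! b! e!) ||q||^{2a} ||x||^{2b} (-2<q,x>)^e,
    # with <q,x>^e = sum_{|alpha|=e} (e!/alpha!) q^alpha x^alpha.
    qn2 = np.sum(queries ** 2, axis=1)
    out = np.zeros(m)
    for k in range(D + 1):
        for a in range(k + 1):
            for b in range(k - a + 1):
                e = k - a - b
                m1 = math.factorial(k) // (
                    math.factorial(a) * math.factorial(b) * math.factorial(e))
                for alpha in multi_indices(dim, e):
                    m2 = math.factorial(e) // math.prod(
                        math.factorial(ai) for ai in alpha)
                    q_term = qn2 ** a * np.prod(queries ** np.array(alpha), axis=1)
                    out += c[k] * m1 * (-2.0) ** e * m2 * q_term * S[(b, alpha)]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = 0.3 * rng.standard_normal((500, 3))  # small diameter => small B
    Q = 0.3 * rng.standard_normal((5, 3))
    approx = batch_gaussian_kde_poly(Q, X, D=10)
    exact = np.exp(-((Q[:, None, :] - X[None, :, :]) ** 2).sum(-1)).sum(1)
    print("max abs error:", np.max(np.abs(approx - exact)))
```

A naive evaluation touches all n·m point-query pairs; here each point is touched once while building the sums S and each query once per monomial, which is the mechanism behind the n^{1 + o(1)} runtime when B (and hence the degree D and the monomial count) is small.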

Subject Classification

ACM Subject Classification
  • Theory of computation → Problems, reductions and completeness
  • Theory of computation → Approximation algorithms analysis
  • Mathematics of computing → Mathematical analysis
Keywords
  • polynomial approximation
  • kernel density estimation
  • Chebyshev polynomials

