The Manifold Joys of Sampling (Invited Talk)

Authors: Yin Tat Lee, Santosh S. Vempala




Author Details

Yin Tat Lee
  • University of Washington, Seattle, WA, USA
  • Microsoft Research, Seattle, WA, USA
Santosh S. Vempala
  • Georgia Tech, Atlanta, GA, USA

Acknowledgements

We thank Yunbum Kook and Andre Wibisono for helpful comments.

Cite As

Yin Tat Lee and Santosh S. Vempala. The Manifold Joys of Sampling (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.ICALP.2022.4

Abstract

We survey recent progress and many open questions in the field of sampling high-dimensional distributions, with specific focus on sampling with non-Euclidean metrics.
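
As illustrative context only (not part of the paper), the sketch below shows the unadjusted Langevin algorithm, the basic Euclidean diffusion-based sampler that the non-Euclidean (Riemannian and mirror) methods surveyed here generalize. The Gaussian target, step size, dimension, and iteration count are placeholder assumptions, not values from the paper.

  import numpy as np

  def grad_f(x):
      # Gradient of the assumed potential f(x) = ||x||^2 / 2, i.e. a standard
      # Gaussian target exp(-f); replace with the gradient of the actual target.
      return x

  def unadjusted_langevin(dim=100, steps=10_000, h=1e-2, seed=0):
      # Euler-Maruyama discretization of the Langevin diffusion
      #   dX_t = -grad f(X_t) dt + sqrt(2) dB_t,
      # whose stationary distribution is proportional to exp(-f).
      rng = np.random.default_rng(seed)
      x = np.zeros(dim)
      for _ in range(steps):
          x = x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(dim)
      return x

  sample = unadjusted_langevin()  # one approximate draw from the target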

Subject Classification

ACM Subject Classification
  • Theory of computation → Randomness, geometry and discrete structures
Keywords
  • Sampling
  • Diffusion
  • Optimization
  • High Dimension
