Parallelising Glauber Dynamics

Author Holden Lee




Author Details

Holden Lee
  • Department of Applied Mathematics and Statistics, The Johns Hopkins University, Baltimore, MD, USA

Acknowledgements

I want to thank Leo Du, Frederic Koehler, and Nicolas Loizou for helpful discussions.

Cite As

Holden Lee. Parallelising Glauber Dynamics. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 49:1-49:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2024.49

Abstract

For distributions over discrete product spaces ∏_{i=1}^n Ω_i, Glauber dynamics is a Markov chain that, at each step, resamples a random coordinate conditioned on the other coordinates. We show that k-Glauber dynamics, which resamples a random subset of k coordinates, mixes k times faster in χ²-divergence and, assuming approximate tensorization of entropy, k times faster in KL-divergence. We apply this to obtain parallel algorithms in two settings: (1) For the Ising model μ_{J,h}(x) ∝ exp(1/2 ⟨x,Jx⟩ + ⟨h,x⟩) with ‖J‖ < 1-c (the regime where fast mixing is known), we show that each step of Θ(n/‖J‖_F)-Glauber dynamics can be implemented efficiently with a parallel algorithm, resulting in a parallel algorithm with running time Õ(‖J‖_F) = Õ(√n). (2) For the mixed p-spin model at high enough temperature, we show that with high probability each step of Θ(√n)-Glauber dynamics can be implemented efficiently, again giving running time Õ(√n).
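As an illustrative sketch only (not the paper's parallel implementation), the following Python code shows what one step of k-Glauber dynamics looks like for the Ising model μ_{J,h}(x) ∝ exp(1/2 ⟨x,Jx⟩ + ⟨h,x⟩): a uniformly random size-k subset of coordinates is resampled from its exact conditional distribution given the rest. The conditional is computed here by brute force over all 2^k completions, so this is only practical for small k; the point of the paper's parallel algorithms is to implement such a step efficiently for large k.

```python
import itertools
import numpy as np

def glauber_step(x, J, h, rng, k=1):
    """One step of k-Glauber dynamics for the Ising model
    mu(x) ∝ exp(0.5 <x, Jx> + <h, x>) on {-1, +1}^n.

    Picks a uniformly random subset S of k coordinates and resamples
    x_S from its conditional distribution given x_{-S}, by enumerating
    all 2^k completions (a brute-force sketch, feasible only for small k).
    Returns a new state array; the input is not modified.
    """
    n = len(x)
    x = x.copy()
    S = rng.choice(n, size=k, replace=False)

    # Unnormalized log-probability of each of the 2^k assignments to x_S,
    # with the coordinates outside S held fixed.
    assignments = list(itertools.product([-1, 1], repeat=k))
    logp = np.empty(len(assignments))
    for idx, a in enumerate(assignments):
        x[S] = a
        logp[idx] = 0.5 * x @ J @ x + h @ x

    # Sample an assignment from the conditional distribution.
    p = np.exp(logp - logp.max())
    p /= p.sum()
    x[S] = assignments[rng.choice(len(assignments), p=p)]
    return x
```

With J = 0 the coordinates are independent and each resampled spin is +1 with probability sigmoid(2 h_i), which gives a quick sanity check on the update rule.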

Subject Classification

ACM Subject Classification
  • Theory of computation → Random walks and Markov chains
Keywords
  • sampling
  • Ising model
  • parallel algorithm
  • Markov chain
  • Glauber dynamics
