LIPIcs.DISC.2018.26.pdf
Sampling constitutes an important tool in a variety of areas: from machine learning and combinatorial optimization to computational physics and biology. A central class of sampling algorithms is the Markov Chain Monte Carlo method, based on the construction of a Markov chain with the desired sampling distribution as its stationary distribution. Many of the traditional Markov chains, such as the Glauber dynamics, do not scale well with increasing dimension. To address this shortcoming, we propose a simple local update rule based on the Glauber dynamics that leads to efficient parallel and distributed algorithms for sampling from Gibbs distributions. Concretely, we present a Markov chain that mixes in O(log n) rounds when Dobrushin's condition for the Gibbs distribution is satisfied. This improves over the LubyGlauber algorithm by Feng, Sun, and Yin [PODC'17], which needs O(Δ log n) rounds, and their LocalMetropolis algorithm, which converges in O(log n) rounds but requires a considerably stronger mixing condition. Here, n denotes the number of nodes in the graphical model inducing the Gibbs distribution, and Δ its maximum degree. In particular, our method can sample a uniform proper coloring with αΔ colors in O(log n) rounds for any α > 2, which almost matches the threshold of the sequential Glauber dynamics and improves on the α > 2 + √2 threshold of Feng et al.
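To make the local update rule concrete, below is a minimal Python sketch of one synchronous round of a parallel Glauber-style update for proper q-colorings: every node independently activates with some probability, proposes a uniform random color, and keeps the proposal only if it collides neither with a neighbor's current color nor with a neighbor's concurrent proposal. The activation probability, the acceptance rule's exact form, and the names `parallel_glauber_round` and `sample_coloring` are illustrative assumptions for this sketch, not the paper's precise Local Glauber dynamics.

```python
import random

def parallel_glauber_round(adj, colors, q, activate_prob=0.5):
    # One synchronous round (illustrative sketch, not the paper's exact rule):
    # each node activates independently and proposes a uniform random color.
    proposals = {v: random.randrange(q) for v in adj
                 if random.random() < activate_prob}
    new_colors = dict(colors)
    for v, c in proposals.items():
        # Accept only if the proposal clashes with no neighbor's current
        # color and no neighbor's concurrent proposal.
        if all(colors[u] != c and proposals.get(u) != c for u in adj[v]):
            new_colors[v] = c
    return new_colors

def sample_coloring(adj, q, rounds):
    # Start from an arbitrary coloring and run a fixed number of rounds;
    # the paper's analysis indicates O(log n) rounds suffice when q > 2Δ
    # (α > 2), up to constants this sketch does not track.
    colors = {v: random.randrange(q) for v in adj}
    for _ in range(rounds):
        colors = parallel_glauber_round(adj, colors, q)
    return colors

# Example: a 10-node cycle (Δ = 2) with q = 5 > 2Δ colors.
n = 10
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
print(sample_coloring(adj, q=5, rounds=30))
```

Because a node rejects its proposal whenever a neighbor concurrently proposes the same color, accepted moves never introduce new conflicts relative to the previous coloring, which is what makes the fully parallel round safe.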