3 Search Results for "Chen, Yin"


Document
Improved Distributed Algorithms for Random Colorings

Authors: Charlie Carlson, Daniel Frishberg, and Eric Vigoda

Published in: LIPIcs, Volume 286, 27th International Conference on Principles of Distributed Systems (OPODIS 2023)


Abstract
Markov Chain Monte Carlo (MCMC) algorithms are a widely-used algorithmic tool for sampling from high-dimensional distributions; a notable example is the equilibrium distribution of graphical models. The Glauber dynamics, also known as the Gibbs sampler, is the simplest example of an MCMC algorithm; the transitions of the chain update the configuration at a randomly chosen coordinate at each step. Several works have studied distributed versions of the Glauber dynamics, and we extend these efforts to a more general family of Markov chains. An important combinatorial problem in the study of MCMC algorithms is random colorings. Given a graph G of maximum degree Δ and an integer k ≥ Δ+1, the goal is to generate a random proper vertex k-coloring of G. Jerrum (1995) proved that the Glauber dynamics has O(n log n) mixing time when k > 2Δ. Fischer and Ghaffari (2018), and independently Feng, Hayes, and Yin (2018), presented a parallel and distributed version of the Glauber dynamics which converges in O(log n) rounds for k > (2+ε)Δ for any ε > 0. We improve this result to k > (11/6-δ)Δ for a fixed δ > 0. This matches the state of the art for randomly sampling colorings of general graphs in the sequential setting. Whereas previous works focused on distributed variants of the Glauber dynamics, our work presents a parallel and distributed version of the more general flip dynamics presented by Vigoda (2000) (and refined by Chen, Delcourt, Moitra, Perarnau, and Postle (2019)), which recolors local maximal two-colored components in each step.
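
For intuition only, here is a minimal Python sketch of the sequential (heat-bath) Glauber dynamics for proper k-colorings that the abstract describes; it is not the paper's distributed algorithm or the flip dynamics, and the example graph, k, and step count are placeholder choices.

import random

def glauber_step(adj, coloring, k):
    # One heat-bath step: pick a uniformly random vertex and recolor it
    # with a color chosen uniformly from those not used by its neighbors.
    v = random.randrange(len(adj))
    blocked = {coloring[u] for u in adj[v]}
    available = [c for c in range(k) if c not in blocked]
    coloring[v] = random.choice(available)  # nonempty whenever k >= Delta + 1

def sample_coloring(adj, k, steps):
    # Start from a greedy proper coloring (valid when k >= Delta + 1),
    # then run the chain for the given number of steps.
    n = len(adj)
    coloring = [0] * n
    for v in range(n):
        used = {coloring[u] for u in adj[v] if u < v}
        coloring[v] = next(c for c in range(k) if c not in used)
    for _ in range(steps):
        glauber_step(adj, coloring, k)
    return coloring

# Example: a 4-cycle with Delta = 2 and k = 3 colors.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(sample_coloring(adj, k=3, steps=1000))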

Cite as

Charlie Carlson, Daniel Frishberg, and Eric Vigoda. Improved Distributed Algorithms for Random Colorings. In 27th International Conference on Principles of Distributed Systems (OPODIS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 286, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{carlson_et_al:LIPIcs.OPODIS.2023.13,
  author =	{Carlson, Charlie and Frishberg, Daniel and Vigoda, Eric},
  title =	{{Improved Distributed Algorithms for Random Colorings}},
  booktitle =	{27th International Conference on Principles of Distributed Systems (OPODIS 2023)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-308-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{286},
  editor =	{Bessani, Alysson and D\'{e}fago, Xavier and Nakamura, Junya and Wada, Koichi and Yamauchi, Yukiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2023.13},
  URN =		{urn:nbn:de:0030-drops-195030},
  doi =		{10.4230/LIPIcs.OPODIS.2023.13},
  annote =	{Keywords: Distributed Graph Algorithms, Local Algorithms, Coloring, Glauber Dynamics, Sampling, Markov Chains}
}
Document
RANDOM
Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions

Authors: Zongchen Chen and Santosh S. Vempala

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to e^{-f} where f:R^d -> R is mu-strongly convex and L-smooth (the condition number is kappa = L/mu). We show that the relaxation time (inverse of the spectral gap) of ideal HMC is O(kappa), improving on the previous best bound of O(kappa^{1.5}); we complement this with an example where the relaxation time is Omega(kappa). When implemented using a nearly optimal ODE solver, HMC returns an epsilon-approximate point in 2-Wasserstein distance using O~((kappa d)^{0.5} epsilon^{-1}) gradient evaluations per step and O~((kappa d)^{1.5} epsilon^{-1}) total time.
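
As a rough illustration of the sampler under study (not the paper's ideal HMC or its ODE-solver implementation), the following Python sketch runs leapfrog HMC with a Metropolis correction on a strongly logconcave target; the quadratic f, step size, and trajectory length are placeholder choices, not values from the paper.

import numpy as np

def hmc_sample(f, grad_f, x0, step_size, n_leapfrog, n_samples, rng):
    # Basic HMC targeting the density proportional to exp(-f(x)).
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * step_size * grad_f(x_new)  # initial half momentum step
        for _ in range(n_leapfrog):
            x_new += step_size * p_new            # full position step
            p_new -= step_size * grad_f(x_new)    # full momentum step
        p_new += 0.5 * step_size * grad_f(x_new)  # turn the last full step into a half step
        # Metropolis accept/reject on the Hamiltonian H = f(x) + |p|^2 / 2.
        h_old = f(x) + 0.5 * p @ p
        h_new = f(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Example: f(x) = 0.5 * x^T A x with A = diag(1, 10), so mu = 1, L = 10, kappa = 10.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
rng = np.random.default_rng(0)
samples = hmc_sample(f, grad_f, x0=[2.0, 2.0], step_size=0.1,
                     n_leapfrog=20, n_samples=2000, rng=rng)
print(samples.mean(axis=0))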

Cite as

Zongchen Chen and Santosh S. Vempala. Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 64:1-64:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2019.64,
  author =	{Chen, Zongchen and Vempala, Santosh S.},
  title =	{{Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.64},
  URN =		{urn:nbn:de:0030-drops-112790},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.64},
  annote =	{Keywords: logconcave distribution, sampling, Hamiltonian Monte Carlo, spectral gap, strong convexity}
}
Document
Forgetting and Update – an exploration

Authors: Abhaya Nayak, Yin Chen, and Fangzhen Lin

Published in: Dagstuhl Seminar Proceedings, Volume 7351, Formal Models of Belief Change in Rational Agents (2007)


Abstract
Knowledge Update (respectively Erasure) and Forgetting are two very different concepts with very different underlying motivations. Both are tools for knowledge management; however, while the former is meant for accommodating new knowledge into a knowledge corpus, the latter is meant for modifying, in fact reducing the expressivity of, the underlying language. In this paper we show that there is an intimate connection between these two concepts: a particular form of knowledge update and literal forgetting are inter-definable. We exploit this connection to enhance our understanding of both update and forgetting.
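
For a concrete picture of forgetting in propositional logic, here is a small Python sketch of the textbook atom-forgetting operation (phi[p/True] or phi[p/False]); the paper's notion of literal forgetting and its connection to update are more refined than this illustration, which is only an assumption-laden stand-in.

from itertools import product

def forget_atom(phi, var):
    # Forgetting an atom: the result holds under an assignment iff phi holds
    # for some value of that atom, i.e. phi[var/True] or phi[var/False].
    return lambda assign: phi({**assign, var: True}) or phi({**assign, var: False})

# Example: phi = p and q; forgetting p yields a formula equivalent to q.
phi = lambda a: a["p"] and a["q"]
psi = forget_atom(phi, "p")
for p_val, q_val in product([False, True], repeat=2):
    assign = {"p": p_val, "q": q_val}
    print(assign, "phi:", phi(assign), "forget p:", psi(assign))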

Cite as

Abhaya Nayak, Yin Chen, and Fangzhen Lin. Forgetting and Update – an exploration. In Formal Models of Belief Change in Rational Agents. Dagstuhl Seminar Proceedings, Volume 7351, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


BibTeX

@InProceedings{nayak_et_al:DagSemProc.07351.12,
  author =	{Nayak, Abhaya and Chen, Yin and Lin, Fangzhen},
  title =	{{Forgetting and Update – an exploration}},
  booktitle =	{Formal Models of Belief Change in Rational Agents},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{7351},
  editor =	{Giacomo Bonanno and James Delgrande and J\'{e}r\^{o}me Lang and Hans Rott},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.07351.12},
  URN =		{urn:nbn:de:0030-drops-12131},
  doi =		{10.4230/DagSemProc.07351.12},
  annote =	{Keywords: Knowledge Update, Erasure, Forgetting, Dalal Distance, Winslett Distance.}
}
  • Refine by Author
  • 1 Carlson, Charlie
  • 1 Chen, Yin
  • 1 Chen, Zongchen
  • 1 Frishberg, Daniel
  • 1 Lin, Fangzhen
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Random walks and Markov chains
  • 1 Theory of computation → Design and analysis of algorithms
  • 1 Theory of computation → Distributed algorithms
  • 1 Theory of computation → Graph algorithms analysis

  • Refine by Keyword
  • 1 Coloring
  • 1 Dalal Distance
  • 1 Distributed Graph Algorithms
  • 1 Erasure
  • 1 Forgetting
  • Show More...

  • Refine by Type
  • 3 document

  • Refine by Publication Year
  • 1 2007
  • 1 2019
  • 1 2024
