Chernoff-Hoeffding Bounds for Markov Chains: Generalized and Simplified

Authors: Kai-Min Chung, Henry Lam, Zhenming Liu, and Michael Mitzenmacher



File

LIPIcs.STACS.2012.124.pdf
  • Filesize: 0.63 MB
  • 12 pages


Cite As

Kai-Min Chung, Henry Lam, Zhenming Liu, and Michael Mitzenmacher. Chernoff-Hoeffding Bounds for Markov Chains: Generalized and Simplified. In 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 14, pp. 124-135, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)
https://doi.org/10.4230/LIPIcs.STACS.2012.124

Abstract

We prove the first Chernoff-Hoeffding bounds for general nonreversible finite-state Markov chains based on the standard L_1 (variation distance) mixing-time of the chain. Specifically, consider an ergodic Markov chain M and a weight function f: [n] -> [0,1] on the state space [n] of M with mean mu = E_{v <- pi}[f(v)], where pi is the stationary distribution of M. A t-step random walk (v_1,...,v_t) on M starting from the stationary distribution pi has expected total weight E[X] = mu t, where X = sum_{i=1}^t f(v_i). Let T be the L_1 mixing-time of M. We show that the probability of X deviating from its mean by a multiplicative factor of delta, i.e., Pr[ |X - mu t| >= delta mu t ], is at most exp(-Omega( delta^2 mu t / T )) for 0 <= delta <= 1, and exp(-Omega( delta mu t / T )) for delta > 1. In fact, the bounds hold even if the weight functions f_i for i in [t] are distinct, provided that all of them have the same mean mu.

We also obtain a simplified proof for the Chernoff-Hoeffding bounds based on the spectral expansion lambda of M, which is the square root of the second largest eigenvalue (in absolute value) of M tilde{M}, where tilde{M} is the time-reversal Markov chain of M. We show that the probability Pr[ |X - mu t| >= delta mu t ] is at most exp(-Omega( delta^2 (1-lambda) mu t )) for 0 <= delta <= 1, and exp(-Omega( delta (1-lambda) mu t )) for delta > 1.

Both of our results extend to continuous-time Markov chains, and to the case where the walk starts from an arbitrary distribution x, at the price of a multiplicative factor depending on x in the concentration bounds.
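
To make the statement concrete, here is a minimal Python simulation sketch (not from the paper): it samples t-step walks on a small, hand-picked nonreversible chain started from the stationary distribution and compares the empirical frequency of Pr[ |X - mu t| >= delta mu t ] with the qualitative shape exp(-delta^2 mu t) of the bound. The chain, the weight function f, and the trial count are illustrative assumptions, and the hidden constant inside Omega(.) as well as the mixing-time factor T are omitted, so only the exponential decay in delta^2 mu t is being illustrated.

import numpy as np

rng = np.random.default_rng(0)

# A small ergodic, nonreversible transition matrix M (rows sum to 1).
M = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6],
              [0.6, 0.3, 0.1]])

# Stationary distribution pi: left eigenvector of M for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(M.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

f = np.array([1.0, 0.0, 1.0])   # weight function f: [n] -> [0,1]
mu = float(pi @ f)              # mean of f under pi
t, trials, delta = 200, 2000, 0.3

def walk_weight():
    """Total weight X = sum_{i=1}^t f(v_i) of a t-step walk started from pi."""
    v = rng.choice(3, p=pi)
    x = f[v]
    for _ in range(t - 1):
        v = rng.choice(3, p=M[v])
        x += f[v]
    return x

X = np.array([walk_weight() for _ in range(trials)])
emp = np.mean(np.abs(X - mu * t) >= delta * mu * t)
print(f"mu = {mu:.3f}")
print(f"empirical Pr[|X - mu t| >= delta mu t] = {emp:.4f}")
print(f"exp(-delta^2 mu t) = {np.exp(-delta**2 * mu * t):.2e}  (bound shape, constants and T omitted)")

With these illustrative parameters the exponent delta^2 mu t is 12, so both the bound's shape and the empirical deviation frequency are essentially zero; lowering t or delta makes the comparison more informative.
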
Keywords
  • probabilistic analysis
  • tail bounds
  • Markov chains
