Dean Doron, Dana Moshkovitz, Justin Oh, David Zuckerman
Creative Commons Attribution 4.0 International license
A natural model of a source of randomness consists of a long stream of symbols X = X_1∘…∘X_t, with some guarantee on the entropy of X_i conditioned on the outcome of the prefix x_1,…,x_{i-1}. We study unpredictable sources, a generalization of the almost Chor-Goldreich (CG) sources considered in [Doron et al., 2023]. In an unpredictable source X, for a typical draw of x ∼ X and for most indices i, the symbol x_i has low probability of occurring given x_1,…,x_{i-1}. This model relaxes the often unrealistic assumption of a CG source that for every i and every prefix x_1,…,x_{i-1}, the next symbol X_i has sufficiently large entropy. Unpredictable sources subsume all previously considered notions of almost CG sources, including notions that [Doron et al., 2023] failed to analyze, and including notions that are equivalent to general sources with high min-entropy.
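For concreteness, one way to read this condition (with parameters k, ε, δ named here only for illustration; the paper's precise definition may differ) is: with probability at least 1-δ over x ∼ X, at least a (1-ε) fraction of the indices i ∈ [t] satisfy

  Pr[X_i = x_i | X_1 = x_1,…,X_{i-1} = x_{i-1}] ≤ 2^{-k}.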
For a lossless expander G = (V,E) with m = log |V|, we consider a random walk V_0,V_1,…,V_t on G using unpredictable instructions that have sufficient entropy with respect to m. Our main theorem is that for almost all steps i with t/2 ≤ i ≤ t in the walk, the vertex V_i is close to a distribution with min-entropy at least m-O(1).
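The following sketch is only meant to illustrate the interface of such a walk; it is not the paper's construction. A standard 3-regular toy graph on Z_p (neighbors v+1, v-1, v^{-1}) stands in for the required lossless expander, a biased i.i.d. stream stands in for an unpredictable source, and all names and parameters below are hypothetical choices for the demo.

import random
from collections import Counter
from math import log2

P = 101  # toy vertex set Z_P; the theorem would require a lossless expander

def neighbor(v, sym):
    # 3-regular toy graph on Z_P: neighbors v+1, v-1, and v^{-1} (mod P)
    if sym == 0:
        return (v + 1) % P
    if sym == 1:
        return (v - 1) % P
    return pow(v, P - 2, P) if v != 0 else 0  # modular inverse; 0 is a fixed point

def walk(stream, v0=1):
    # Drive the walk with the stream symbols; the condenser's output at any
    # point is simply the current vertex.
    v = v0
    for s in stream:
        v = neighbor(v, s)
    return v

def sample_stream(t, rng):
    # Toy stand-in for an unpredictable source: biased i.i.d. symbols, so no
    # symbol is ever fully determined by the prefix.
    return rng.choices([0, 1, 2], weights=[5, 3, 2], k=t)

rng = random.Random(0)
t, trials = 200, 20000
counts = Counter(walk(sample_stream(t, rng)) for _ in range(trials))
p_max = max(counts.values()) / trials  # crude empirical estimate of the max vertex probability
print(f"empirical min-entropy of V_t ≈ {-log2(p_max):.2f} bits (log2 |V| = {log2(P):.2f})")

The point of the sketch is the online, low-space interface: the walk stores only the current vertex, i.e., about m = log |V| bits, independently of the stream length t.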
As a result, we obtain seeded online condensers with constant entropy gap, and seedless (deterministic) condensers outputting a constant fraction of the entropy. In particular, our condensers run in space comparable to the output entropy, as opposed to the size of the stream, and even when the length t of the stream is not known ahead of time. As another corollary, we obtain a new extractor based on expander random walks that handles lower entropy than the classic expander-based construction relying on spectral techniques [Gillman, 1998].
As our main technical tool, we provide a novel analysis covering a key case of adversarial random walks on lossless expanders that [Doron et al., 2023] fails to address. As part of the analysis, we provide a "chain rule for vertex probabilities". The standard chain rule states that for every x ∼ X and every i, Pr(x_1,…,x_i) = Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr(x_1,…,x_{i-1}). If W(x_1,…,x_i) is the vertex reached using x_1,…,x_i, then the chain rule for vertex probabilities essentially states that the same phenomenon occurs for a typical x: Pr[V_i = W(x_1,…,x_i)] ≲ Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr[V_{i-1} = W(x_1,…,x_{i-1})], where V_i is the vertex distribution of the random walk at step i using X.
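Displayed side by side for readability, with the same notation as above (the "≲" hides the typicality requirement and constant-factor slack of the paper's precise statement):

  Pr(x_1,…,x_i) = Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr(x_1,…,x_{i-1})   (standard chain rule)
  Pr[V_i = W(x_1,…,x_i)] ≲ Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr[V_{i-1} = W(x_1,…,x_{i-1})]   (chain rule for vertex probabilities)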
@InProceedings{doron_et_al:LIPIcs.CCC.2025.30,
author = {Doron, Dean and Moshkovitz, Dana and Oh, Justin and Zuckerman, David},
title = {{Online Condensing of Unpredictable Sources via Random Walks}},
booktitle = {40th Computational Complexity Conference (CCC 2025)},
pages = {30:1--30:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-379-9},
ISSN = {1868-8969},
year = {2025},
volume = {339},
editor = {Srinivasan, Srikanth},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.30},
URN = {urn:nbn:de:0030-drops-237243},
doi = {10.4230/LIPIcs.CCC.2025.30},
annote = {Keywords: Randomness Extractors, Expander Graphs}
}