Simplifying Armoni’s PRG
Abstract
We propose a simple variant of the INW pseudo-random generator, where blocks have varying lengths, and prove that it achieves the same parameters as the more complicated construction of Armoni. This shows there is no need for the specialized PRGs of Nisan and Zuckerman and of Armoni: they can be obtained as simple variants of INW.
For the construction to work we need space-efficient extractors with tiny entropy loss. We use the extractors from [2] instead of [6], taking advantage of the very high min-entropy regime we work in. We remark that using these extractors has the additional benefit of making the dependence on the branching-program alphabet correct.
Keywords and phrases: PRG, ROBP, read-once, random, pseudorandom, Armoni, derandomization
Category: RANDOM
Funding: Ben Chen: Israel Science Foundation (grant number 443/22).
Copyright and License:
2012 ACM Subject Classification: Theory of computation → Pseudorandomness and derandomization
Editors: Alina Ene and Eshan Chattopadhyay
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In this paper we revisit the problem of constructing pseudo-random generators (PRGs) against bounded-width read-once branching programs (ROBPs).¹ Nisan [9] constructed an ε-fooling PRG for length-n, width-w ROBPs over a binary alphabet with seed length O(log n · log(nw/ε)) using pair-wise independence. Impagliazzo, Nisan and Wigderson [7] gave a variant of the construction with a similar seed length, but using expanders, extractors or samplers instead of pair-wise independence. The INW PRG has seed length O(log n · log(nw/ε)).
¹ For a formal definition of PRG and ROBP see Definitions 1 and 2.
For the special case where the width of the ROBP is much larger than its length and the inverse error, i.e., log w ≥ (n/ε)^{1/c} for some constant c, Nisan and Zuckerman [10] constructed a different PRG with an optimal seed length O(log(nw/ε)). Armoni [1] used the NZ construction recursively, and with a careful analysis obtained a PRG that interpolates between the two seed lengths, for constant-size Σ and in a restricted range of the parameters.
In [8], Armoni's PRG is instantiated with a specific space-efficient extractor, removing the limitation on the parameters, and the resulting seed length is O(log n · log(nw/ε) / max(1, log log w − log log(n/ε))). Armoni's PRG with [8] bridges the gap between INW and NZ: it asymptotically matches INW when w is at most polynomial in n/ε, and it matches NZ when log w ≥ (n/ε)^{Ω(1)}.
In this paper we construct another PRG that matches the parameters obtained by Armoni. The surprising thing about our construction is that it avoids the NZ PRG altogether, and also avoids the recursion in Armoni; instead, it is a direct, simple variant of INW. Put differently, Armoni interpolates between two different constructions, the NZ PRG and the INW PRG, and his construction is a recursive combination of both. We show that a simple variant of INW alone gives the same bounds in all parameter regimes.
To understand our new construction, we revisit the INW construction. The main building block behind the INW construction is showing that one can recycle the randomness given to a ROBP, where the recycling is done using an expander, or more generally, a sampler or an extractor. Specifically, instead of applying a PRG twice with two independent seeds, one can apply it once (consuming its seed) and then use a short additional seed to recycle the used bits and obtain from them a new seed for the second application of the PRG. The constructions in [9, 7] preserve the same seed length for the two applications of the PRG (we say, in short, that INW preserves the block length).
Nisan and Zuckerman observe that the seed needed for the extractor is much shorter than the string it acts upon, and so, instead of applying the extractor once, it is more beneficial to apply it several times with independent short seeds. Extending these ideas, Armoni implements this idea recursively.
In this paper we suggest an alternative to NZ and Armoni: instead of enlarging the blocks so that the extractor seed becomes negligible, we keep the natural block lengths and instead carefully track our losses. Next, we explain this idea in detail.
Suppose we start with a seed x of length s and we want to create from it two seeds for two independent applications of the PRG. The way INW achieves this is by carving out d bits from x and reserving them for the extractor application. Specifically, write x = x' ∘ y where y has length d. Then, in INW, the first PRG is called with the seed x', and the second PRG is called with the seed E(x', y), where E is an extractor. Thus, the seed length for the first application is s − d, and the seed length for the second application is the extractor output length, which suffers the losses the extractor suffers. These include:
-
The amount of information the ROBP retains about x' after seeing the output of the first PRG. This loss amounts to log w.
-
The extractor entropy loss, which is 2 log(1/ε) + O(1) in non-explicit constructions. For the time being let us assume we can explicitly achieve a loss of O(log(1/ε)); then this loss is comparable with the first loss.
Thus, the seed loss in the first application is d, whereas the loss in the second application is larger: it is Δ = log w + O(log(1/ε)). The two losses are quite different when d ≪ log(w/ε).
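Schematically, and with the caveat that the exact constants depend on the extractor used, one recycling step trades a seed of length s for the following two (here d denotes the number of carved-out bits, Δ the total loss of the recycled seed, w the width and ε the extractor error; this accounting is a sketch, not a formal statement):

```latex
% One recycling step (schematic loss accounting).
\begin{align*}
\text{first application:}  \quad & s \;\longmapsto\; s - d
   && \text{($d$ bits reserved as the extractor seed)} \\
\text{second application:} \quad & s \;\longmapsto\; s - \Delta,
   \quad \Delta \;=\; \underbrace{\log w}_{\substack{\text{information}\\ \text{leaked}}}
   \;+\; \underbrace{O(\log(1/\varepsilon))}_{\substack{\text{extractor}\\ \text{entropy loss}}}
\end{align*}
```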
We then do the obvious: we define a recursive construction where, starting with a seed of length s, we make two recursive calls, one with seed length s − d and one with seed length s − Δ, where d is the extractor seed length and Δ = log w + O(log(1/ε)) is the loss of the recycled seed. Doing the analysis, we find that we recover Armoni's bound in a much simpler way.
To summarize, when d ≪ Δ, it is much better to employ the INW strategy with varying block lengths, and doing so gives the correct parameters in a straightforward way. However, NZ and later Armoni try to preserve the block length throughout the different calls. This forces NZ to use several calls to the extractor, and later forces Armoni to use a specially crafted recursion (and a more careful analysis). What we show here is that all of that is unnecessary: with varying block lengths the INW construction obtains the same parameters as NZ and Armoni.
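As a toy illustration of this accounting, the following sketch (with hypothetical loss parameters d and Δ, not the ones from the actual analysis) counts how many output symbols a seed of length s can support when the left child loses only d bits, versus an INW-style tree where both children pay the full loss Δ:

```python
from functools import lru_cache

# Hypothetical loss parameters: D = extractor-seed loss of the left
# child, DELTA = total loss of the recycled (right) seed, D << DELTA.
D, DELTA = 2, 8

@lru_cache(maxsize=None)
def out_varying(s: int) -> int:
    """Output symbols reachable from an s-bit seed, varying block lengths."""
    if s < DELTA:                  # seed too short to split: one leaf symbol
        return 1
    return out_varying(s - D) + out_varying(s - DELTA)

@lru_cache(maxsize=None)
def out_uniform(s: int) -> int:
    """Same tree when both children pay the larger loss DELTA (INW-style)."""
    if s < DELTA:
        return 1
    return 2 * out_uniform(s - DELTA)
```

With these toy numbers an 80-bit seed supports 2^10 = 1024 symbols in the uniform-loss tree, and considerably more in the varying-length tree.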
For our construction to work, we require space-efficient extractors with small entropy loss. In general, there are non-explicit extractors with entropy loss 2 log(1/ε) + O(1), i.e., m = k + d − 2 log(1/ε) − O(1); notice that the entropy loss is independent of n. Kane et al. [8] instantiated Armoni's PRG with the GUV extractor [6, Theorem 5.10], which they show is space-efficient, but whose entropy loss is considerably larger; such a loss is too much for us.
In general, the best explicit extractors in the relevant parameter regimes lose much more [5, 11]. What saves us here is that we do not need general extractors, but rather extractors working in the very high min-entropy regime, where the entropy deficiency n − k is small. In this regime one can split the source into a large first block followed by a small second block, and then use a block-wise extractor. This was implemented in [2]. The resulting extractor is cited in Theorem 4; it has low entropy loss and is space-efficient.
Replacing the lossy extractor used in [8] with the small entropy-loss extractor of [2] has other benefits. It turns out this also reduces the additive dependence on the alphabet size |Σ| in the seed length. We summarize the parameters obtained by the previous constructions and by our construction in Table 1.
| Seed Size | Reference | Remarks |
|---|---|---|
| O(log n · log(nw/ε)) | [9] | |
| O(log n · log(nw/ε)) | [7] | |
| O(log(nw/ε)) | [10] | When log w ≥ (n/ε)^{1/c} for some constant c |
| O(log n · log(nw/ε) / max(1, log log w − log log(n/ε))) | [1] | With the extractors of [8] |
| O(log n · log(nw/ε) / max(1, log log w − log log(n/ε))) | This paper | Also, [1] with the extractor of Theorem 4 |
We remark that while the improvement in the dependence on |Σ| may seem insignificant, it can help simplify certain constructions. A notable example is the recent work of Cheng and Wu [3], which employs an iterative process alternating steps of length reduction and alphabet reduction. Using our PRG (and the extractor of [2]) in the length reduction, the alphabet reduction becomes unnecessary.
2 Preliminaries
[n] denotes the set {1, …, n}. For a matrix M and i, j ∈ [w], M[i, j] is the value of M at the i'th row and j'th column. ∥M∥ is the spectral norm of M. For every function f : [w] → [w] there is a corresponding boolean matrix M_f such that M_f[i, j] = 1 iff f(i) = j. We denote the set of such matrices by SB_w (stochastic, boolean matrices).
Definition 1 (ROBP).
Let Σ be an arbitrary finite set and n, w ∈ ℕ. B is a width-w, length-n read-once branching program (ROBP) on alphabet Σ if it is a sequence of functions B_1, …, B_n, where each B_i maps a symbol σ ∈ Σ to a w × w stochastic boolean matrix B_i(σ). The evaluation of B on input x ∈ Σ^n is the linear operator B(x) = B_1(x_1) ⋯ B_n(x_n). We also say B is an (n, w, Σ)-ROBP.
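As a concrete toy instance of this definition, the sketch below evaluates a width-2, length-3 ROBP over the alphabet {0,1} that computes the parity of its input, representing each layer function as a 2 × 2 boolean stochastic matrix (the program and its matrices are illustrative, not from the paper):

```python
# Width-2, length-3 ROBP for parity over alphabet {0,1}.  Each layer maps
# a symbol to a boolean stochastic matrix: identity on 0, swap on 1.
KEEP = [[1, 0], [0, 1]]
SWAP = [[0, 1], [1, 0]]
LAYERS = [{0: KEEP, 1: SWAP}] * 3

def matmul(A, B):
    """Multiply two square 0/1 matrices over the integers."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def evaluate(x):
    """B(x) = B_1(x_1) * ... * B_n(x_n); return the start-state row."""
    M = KEEP
    for layer, sym in zip(LAYERS, x):
        M = matmul(M, layer[sym])
    return M[0]          # row of start state 0
```

Reading off the start-state row gives the end state, which here equals the parity of the three input symbols.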
Definition 2 (PRG).
Let Σ be an arbitrary set and n, s ∈ ℕ. A pseudo random generator is a function G : {0,1}^s → Σ^n. For an (n, w, Σ)-ROBP B we say that G ε-fools B if
∥ E_{x∼U_s}[B(G(x))] − E_{y∼U(Σ^n)}[B(y)] ∥ ≤ ε.
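The fooling condition can be checked by brute force on tiny examples. The sketch below uses a deliberately bad toy PRG (not one from this paper) that expands a 2-bit seed to 3 bits by appending the XOR of the seed bits, and computes its error against the parity function, which a width-2 ROBP computes:

```python
from itertools import product

def G(x):
    """Toy PRG {0,1}^2 -> {0,1}^3: append the XOR of the two seed bits."""
    return (x[0], x[1], x[0] ^ x[1])

def parity(y):
    return y[0] ^ y[1] ^ y[2]

# Acceptance probabilities under the PRG and under true randomness.
p_prg = sum(parity(G(x)) == 0 for x in product((0, 1), repeat=2)) / 4
p_true = sum(parity(y) == 0 for y in product((0, 1), repeat=3)) / 8
fooling_error = abs(p_prg - p_true)
```

Every output of this G has parity 0, so its error against parity is 1/2: the definition cleanly separates a bad generator from the uniform distribution.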
2.1 Extractors
U_n denotes the uniform distribution on n bits. The min-entropy of a random source X, denoted H_∞(X), is min_x log(1 / Pr[X = x]). The statistical distance between two random variables X, Y defined over a domain Ω is SD(X, Y) = max_{A ⊆ Ω} |Pr[X ∈ A] − Pr[Y ∈ A]|. The statistical distance can be equivalently defined as SD(X, Y) = (1/2) Σ_{ω ∈ Ω} |Pr[X = ω] − Pr[Y = ω]|.
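A small numeric illustration of both definitions, on a toy distribution over four strings compared against the uniform one (the numbers are arbitrary examples):

```python
from math import log2

# Toy distribution X over {0,1}^2 and the uniform distribution U.
X = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
U = {k: 0.25 for k in X}

min_entropy = -log2(max(X.values()))        # H_inf(X) = min_x log(1/Pr[X=x])
stat_dist = 0.5 * sum(abs(X[k] - U[k]) for k in X)   # half the L1 distance
```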
Definition 3 (extractor).
Let n, k, d, m ∈ ℕ and ε > 0. A function E : {0,1}^n × {0,1}^d → {0,1}^m is a (k, ε) extractor if for every random variable X over {0,1}^n with H_∞(X) ≥ k it holds that
SD(E(X, U_d), U_m) ≤ ε.
Theorem 4 ([2], high min-entropy extractor with a small entropy loss).
For every n, every ε > 0, and every k in the very high min-entropy regime, there is a family of functions Ext : {0,1}^n × {0,1}^d → {0,1}^m that is a (k, ε) extractor with seed length d = O(log(n/ε)) and entropy loss O(log(1/ε)). Furthermore, Ext can be computed in time poly(n) and space O(log(n/ε)).
Cohen et al. [4] used a more careful space analysis to obtain a different space bound; for our analysis the bound of Chattopadhyay and Liao suffices.
3 The new PRG
As explained in the introduction, we apply the INW approach, each time replacing a seed with two (shorter) seeds; unlike INW, the two seeds have different lengths. Specifically, if we start with an initial seed of length s, then we can only pass s − d bits to the first application of the PRG, because we need to keep d independent bits for the recycling step. When we recycle the randomness, say with an extractor, and get a new seed for the second PRG, we can recover these d bits, but we incur two new losses:
-
An entropy loss of order log(1/ε'), where ε' is the extractor error, which we take to be ε/n, where ε is the final error and n is the final number of blocks, and,
-
A log w loss, due to the information collected by the width-w branching program after seeing the output of the first PRG.
Thus, for the second application we lose Δ = log w + O(log(n/ε)) bits. Note that d is significantly smaller than Δ when log(n/ε) ≪ log w. To summarize, we replace a length-s seed with two seeds, one of length s − d and the other of length s − Δ, where d = O(log(n/ε)) and Δ = log w + O(log(n/ε)), and where the constants behind the big-O notation are essentially determined by the seed length and the entropy loss of the explicit extractor that we use (plus an additive log |Σ| added to Δ).
Having the recycling building block, we use it recursively and define a sequence of PRGs. Applying this idea in a tree-like construction, we have the following construction:
Construction 5 (INW with varying block lengths).
Given a set Σ, parameters d and Δ, and a family of extractors
Ext_s : {0,1}^{s−d} × {0,1}^d → {0,1}^{s−Δ},
define a family of PRGs G_s by
G_s(x' ∘ y) = G_{s−d}(x') ∘ G_{s−Δ}(Ext_s(x', y)),
where |x'| = s − d and |y| = d for s ≥ Δ + d, and G_s(x) outputs a single symbol of Σ for s < Δ + d.
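The shape of this recursion can be sketched concretely. In the toy code below the extractor is replaced by a truncated SHA-256 hash — a placeholder with none of the extractor guarantees, used only to exhibit the recursion — and the parameters D and DELTA are hypothetical:

```python
import hashlib

D, DELTA = 16, 32          # hypothetical: D = extractor seed, DELTA = loss

def toy_ext(source: str, seed: str, m: int) -> str:
    """Placeholder 'extractor': the first m bits of SHA-256(source | seed)."""
    h = int(hashlib.sha256((source + "|" + seed).encode()).hexdigest(), 16)
    return bin(h)[2:].zfill(256)[:m]

def G(x: str) -> list:
    """G(x' . y) = G(x') . G(toy_ext(x', y)); one output bit per leaf."""
    s = len(x)
    if s < D + DELTA:                    # base case: a single output symbol
        return [x[0]]
    left, seed = x[:s - D], x[s - D:]    # carve out D bits for recycling
    return G(left) + G(toy_ext(left, seed, s - DELTA))
```

An 80-bit seed here yields 5 output symbols: the left subtree shrinks by only D bits per level while each right subtree pays the full DELTA.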
Theorem 6.
Let n, w ∈ ℕ, ε > 0, and let Σ be a finite set. There is a large enough constant c s.t. setting d = c · log(n/ε) and Δ = c · (log(nw/ε) + log |Σ|), and assuming a family of extractors as in Theorem 4 with error ε/n, we have that G_s as in Construction 5 is an ε-fooling PRG with seed length s = O(Δ · log n / max(1, log(Δ/d))).
We need to analyze the output length of the generator and to prove correctness. We start by analyzing the output length n of G_s as a function of the seed length s.
3.1 The seed length
Recall that n is the number of symbols the PRG outputs. Conversely, let us denote by r(n) the seed length needed to output n symbols, i.e., the minimal integer s such that a seed of length s produces at least n symbols. Then,
Lemma 7 (seed size).
r(n) = Θ(Δ · log n / max(1, log(Δ/d))).
To gain intuition, think of the recursion in Construction 5 as a tree, where at the root we have our initial seed, and every non-leaf vertex with seed length s has two children: a left child with seed length s − d, and a right child with seed length s − Δ. The leaves are vertices whose seed length is below the threshold. A path in the tree is a sequence of left and right steps from the root to a leaf. Unlike the PRGs of [9, 7, 1], where all paths have the same length, in our construction different paths have different lengths.
Proof.
Without loss of generality assume s is an integer multiple of d and Δ is an integer multiple of d. Every output symbol corresponds to a root-to-leaf path, and a path taking a left steps and b right steps consumes a·d + b·Δ of the seed, so
n ≤ Σ_{a·d + b·Δ ≤ s} C(a + b, b),
where a (resp. b) is the number of left (resp. right) steps in the path. While we need a lower bound on n we also derive a matching upper bound.
For the upper bound we notice that a ≤ s/d and b ≤ s/Δ. As C(a + b, b) is an increasing monotone function of each parameter a and b when the other is fixed, we conclude that every summand is at most C(s/d + s/Δ, s/Δ). Also notice that there are at most (s/d) · (s/Δ) legal assignments for (a, b).
For the lower bound we notice that n ≥ C(a + b, b) for any single legal assignment (a, b), and (a, b) = (s/(2d), s/(2Δ)) is a legal assignment. Thus,
n ≥ C(s/(2d) + s/(2Δ), s/(2Δ)).
Using (a/b)^b ≤ C(a + b, b) ≤ (e(a + b)/b)^b with a/b = Δ/d in both bounds, we get
log n = Θ((s/Δ) · log(Δ/d)).
Thus, for Δ ≥ 2d we have log(Δ/d) ≥ 1 and therefore s = Θ(Δ · log n / log(Δ/d)). Similarly, for Δ < 2d the tree is essentially a balanced binary tree and therefore log n = Θ(s/Δ). I.e., log n = Θ((s/Δ) · max(1, log(Δ/d))).
We conclude that
r(n) = Θ(Δ · log n / max(1, log(Δ/d))),
proving Lemma 7.
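The Θ(·) behavior can be sanity-checked numerically. The sketch below uses toy parameters d = 2, Δ = 32 (nothing here is the paper's actual setting), counts leaves of the recursion tree, and compares log n(s) with the predicted (s/Δ) · log(Δ/d) up to a constant factor:

```python
from functools import lru_cache
from math import log2

# Toy parameters: left-child loss D and right-child loss DELTA.
D, DELTA = 2, 32

@lru_cache(maxsize=None)
def leaves(s: int) -> int:
    """Leaves of the recursion tree rooted at a seed of length s."""
    if s < DELTA:
        return 1
    return leaves(s - D) + leaves(s - DELTA)

def ratio(s: int) -> float:
    """log of the actual leaf count over the predicted (s/DELTA)*log(DELTA/D)."""
    return log2(leaves(s)) / ((s / DELTA) * log2(DELTA / D))
```

The ratio stays bounded between constants as s grows, as the lemma predicts.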
3.2 Correctness
Lemma 8 (Following [7]).
Suppose
-
G_1 : {0,1}^{s−d} → Σ^{n_1} is a PRG that ε_1-fools width-w ROBPs, and,
-
G_2 : {0,1}^{s−Δ} → Σ^{n_2} is a PRG that ε_2-fools width-w ROBPs,
for some n_1, n_2 ∈ ℕ and ε_1, ε_2 > 0. Further assume
-
Ext : {0,1}^{s−d} × {0,1}^d → {0,1}^{s−Δ} is a (s − d − log(w/ε_ext), ε_ext) extractor.
Then G defined by
G(x' ∘ y) = G_1(x') ∘ G_2(Ext(x', y)), where |x'| = s − d and |y| = d,
(ε_1 + ε_2 + 2ε_ext)-fools width-w ROBPs of length n_1 + n_2.
The main difference between Lemma 8 and the corresponding lemma in [7] is that G_1 and G_2 take inputs of different lengths. For completeness we give the proof:
Proof.
Let B be an (n_1 + n_2, w, Σ)-ROBP. The main claim is
Claim 9.
SD((V, Ext(X', Y)), (V, U_{s−Δ})) ≤ 2ε_ext, where X' ∘ Y ∼ U_s and V is the state of B reached after reading G_1(X').
Proof.
Let v be the state in layer n_1 of B reached after taking a walk on B according to G_1(x'). We split our expectation into two cases.
-
We reach a vertex v that has at most (ε_ext/w) · 2^{s−d} sources. Using the union bound over the w states, we see that the probability over x' of this event is at most ε_ext. This gives an error of at most ε_ext.
-
We reach a vertex v that has at least (ε_ext/w) · 2^{s−d} sources. In this case, the min-entropy of X' conditioned on reaching v is at least s − d − log(w/ε_ext). Since Ext is a (s − d − log(w/ε_ext), ε_ext) extractor, from the adversary's point of view the distributions Ext(X', Y) and U_{s−Δ} are ε_ext-close, and therefore this case adds at most ε_ext to the distance.
This proves Claim 9.
Thus, by the triangle inequality, the fooling error of G against B is at most the sum of the distances between the consecutive hybrids
(G_1(X'), G_2(Ext(X', Y))), (G_1(X'), G_2(U_{s−Δ})), (G_1(X'), U(Σ^{n_2})), (U(Σ^{n_1}), U(Σ^{n_2})).
The first distance is bounded by 2ε_ext by Claim 9, the second by ε_2 because G_2 fools width-w ROBPs, and the third by ε_1 because G_1 fools width-w ROBPs, completing the proof of Lemma 8.
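To see how these per-application errors add up across the whole construction, note that Lemma 8 is invoked once per internal node of the recursion tree, and a binary tree with n leaves has exactly n − 1 internal nodes. A small sketch (hypothetical parameters) confirms the count:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def counts(s: int, d: int, delta: int):
    """Return (leaves, internal_nodes) of the recursion tree from seed s."""
    if s < delta:                       # leaf: one output symbol, no split
        return 1, 0
    l_leaves, l_int = counts(s - d, d, delta)
    r_leaves, r_int = counts(s - delta, d, delta)
    return l_leaves + r_leaves, l_int + r_int + 1
```

Hence the total error is at most (n − 1) times the per-node error bound of Lemma 8.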
Once we know how the error accumulates in a single application, we can deduce how it accumulates throughout the tree:
Lemma 10 (error accumulation).
G_s (2n · ε_ext)-fools width-w ROBPs, where n is the number of symbols G_s outputs and ε_ext is the error of the extractors used.
Proof.
By induction over the recursion tree: by Lemma 8, each internal node adds at most 2ε_ext on top of the errors of its two subtrees, and a binary tree with n leaves has n − 1 internal nodes.
References
- [1] Roy Armoni. On the derandomization of space-bounded computations. In International Workshop on Randomization and Approximation Techniques in Computer Science, pages 47–59. Springer, 1998. doi:10.1007/3-540-49543-6_5.
- [2] Eshan Chattopadhyay and Jyun-Jie Liao. Optimal error pseudodistributions for read-once branching programs. arXiv preprint arXiv:2002.07208, 2020. arXiv:2002.07208.
- [3] Kuan Cheng and Ruiyang Wu. Weighted pseudorandom generators for read-once branching programs via weighted pseudorandom reductions. arXiv preprint arXiv:2502.08272, 2025. doi:10.48550/arXiv.2502.08272.
- [4] Gil Cohen, Dean Doron, Ori Sberlo, and Amnon Ta-Shma. Approximating iterated multiplication of stochastic matrices in small space. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 35–45, 2023. doi:10.1145/3564246.3585181.
- [5] Zeev Dvir, Swastik Kopparty, Shubhangi Saraf, and Madhu Sudan. Extensions to the method of multiplicities, with applications to kakeya sets and mergers. SIAM Journal on Computing, 42(6):2305–2328, 2013. doi:10.1137/100783704.
- [6] Venkatesan Guruswami, Christopher Umans, and Salil Vadhan. Unbalanced expanders and randomness extractors from parvaresh–vardy codes. Journal of the ACM (JACM), 56(4):1–34, 2009. doi:10.1145/1538902.1538904.
- [7] Russell Impagliazzo, Noam Nisan, and Avi Wigderson. Pseudorandomness for network algorithms. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pages 356–364, 1994. doi:10.1145/195058.195190.
- [8] Daniel M Kane, Jelani Nelson, and David P Woodruff. Revisiting norm estimation in data streams. arXiv preprint arXiv:0811.3648, 2008. arXiv:0811.3648.
- [9] Noam Nisan. Pseudorandom generators for space-bounded computations. In Proceedings of the twenty-second annual ACM symposium on Theory of computing, pages 204–212, 1990. doi:10.1145/100216.100242.
- [10] Noam Nisan and David Zuckerman. Randomness is linear in space. Journal of Computer and System Sciences, 52(1):43–52, 1996. doi:10.1006/JCSS.1996.0004.
- [11] Amnon Ta-Shma and Christopher Umans. Better condensers and new extractors from parvaresh-vardy codes. In 2012 IEEE 27th Conference on Computational Complexity, pages 309–315. IEEE, 2012. doi:10.1109/CCC.2012.25.
