LIPIcs.CCC.2019.22.pdf
A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz, 2016; Kol et al., 2017; Raz, 2017; Moshkovitz and Moshkovitz, 2018; Beame et al., 2018; Garg et al., 2018]. For example, any algorithm for learning parities of size n requires either a memory of size Omega(n^{2}) or an exponential number of samples [Raz, 2016]. All of these works modeled the learner as a one-pass branching program, allowing only a single pass over the stream of samples. In this work, we prove the first memory-samples lower bounds (with a super-linear lower bound on the memory size and a super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Omega(n^{1.5}) or at least 2^{Omega(sqrt{n})} samples.

More generally, a matrix M: A x X -> {-1,1} corresponds to the following learning problem: an unknown element x in X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i in A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, l, r are such that any submatrix of M with at least 2^{-k} * |A| rows and at least 2^{-l} * |X| columns has a bias of at most 2^{-r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Omega(k * min{k, sqrt{l}}), or at least 2^{Omega(min{k, sqrt{l}, r})} samples.
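To make the streaming model concrete, the sketch below generates the sample stream for the parity learning problem, where the hidden x is a uniformly random vector in {0,1}^n and each label is b_i = M(a_i, x) = (-1)^{<a_i, x> mod 2}. This is a minimal illustrative sketch, not code from the paper; the helper name parity_sample_stream and its parameters are hypothetical.

```python
import numpy as np

def parity_sample_stream(n, num_samples, rng=None):
    """Yield samples (a_i, b_i) for learning a parity of size n.

    The unknown parity x in {0,1}^n is chosen uniformly at random; each
    a_i in {0,1}^n is chosen uniformly at random, and the label is
    b_i = M(a_i, x) = (-1)^{<a_i, x> mod 2}, matching the matrix
    M: A x X -> {-1,1} described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.integers(0, 2, size=n)          # unknown element to be learned
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n)      # uniformly random query vector
        b = (-1) ** (int(a @ x) % 2)        # label b_i = M(a_i, x)
        yield a, b
    # x is never revealed: the learner only sees the (a_i, b_i) stream.

if __name__ == "__main__":
    for a, b in parity_sample_stream(n=8, num_samples=3):
        print(a, b)
```

A one-pass learner reads this stream once; the two-pass model studied in the paper allows the learner to read the same stream a second time, which is exactly the setting in which the Omega(n^{1.5}) memory vs. 2^{Omega(sqrt{n})} samples trade-off above applies.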