Entropy Samplers and Strong Generic Lower Bounds For Space Bounded Learning

Authors: Dana Moshkovitz, Michal Moshkovitz



File
  • LIPIcs.ITCS.2018.28.pdf (0.56 MB, 20 pages)


Cite As

Dana Moshkovitz and Michal Moshkovitz. Entropy Samplers and Strong Generic Lower Bounds For Space Bounded Learning. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)
https://doi.org/10.4230/LIPIcs.ITCS.2018.28

Abstract

With any hypothesis class one can associate a bipartite graph whose vertices are the hypotheses H on one side and all possible labeled examples X on the other side; a hypothesis is connected to all the labeled examples that are consistent with it. We call this graph the hypotheses graph. We prove that any hypothesis class whose hypotheses graph is mixing cannot be learned using fewer than Omega(log^2 |H|) memory bits unless the learner uses at least |H|^Omega(1) labeled examples. Our work builds on a combinatorial framework that we suggested in a previous work for proving lower bounds on space bounded learning. The strong lower bound is obtained by defining a new notion of pseudorandomness, the entropy sampler. Raz obtained a similar result using different ideas.
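The hypotheses graph described above can be made concrete with a small sketch. The Python example below (not taken from the paper) builds the bipartite graph for the class of parity functions over {0,1}^n, a standard illustrative choice of hypothesis class; the function names such as hypotheses_graph are hypothetical. It only constructs the adjacency structure and does not reproduce the paper's mixing analysis or the entropy-sampler machinery.

    from itertools import product

    def parity(S, x):
        # Parity hypothesis h_S(x) = <S, x> mod 2, for S, x in {0,1}^n.
        return sum(s * xi for s, xi in zip(S, x)) % 2

    def hypotheses_graph(n):
        # Bipartite hypotheses graph: hypotheses H (one parity function per
        # subset S of the n coordinates) on one side, labeled examples
        # X = {0,1}^n x {0,1} on the other. A hypothesis is adjacent to
        # every labeled example that is consistent with it.
        H = list(product([0, 1], repeat=n))
        X = [(x, b) for x in product([0, 1], repeat=n) for b in (0, 1)]
        edges = {S: [(x, b) for (x, b) in X if parity(S, x) == b] for S in H}
        return H, X, edges

    if __name__ == "__main__":
        H, X, edges = hypotheses_graph(3)
        print(len(H), "hypotheses,", len(X), "labeled examples")
        # Each parity hypothesis is consistent with exactly half of the
        # labeled examples, so all degrees on the hypothesis side are equal.
        print({len(neighbors) for neighbors in edges.values()})  # {8} for n = 3

For n = 3 there are 8 hypotheses and 16 labeled examples, and every hypothesis is consistent with exactly 8 of them; this kind of uniformity in how edges are spread is, roughly, what the mixing condition on the hypotheses graph quantifies for general hypothesis classes.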
Keywords
  • learning
  • space bound
  • mixing
  • certainty
  • entropy sampler


References

  1. P. Beame, S. O. Gharan, and X. Yang. Time-space tradeoffs for learning from small test spaces: Learning low degree polynomial functions. Technical report, ECCC, 2017.
  2. B. Chazelle. The Discrepancy Method: Randomness and Complexity. Cambridge University Press, 2000.
  3. S. Garg, R. Raz, and A. Tal. Extractor-based time-space lower bounds for learning. Technical report, ECCC, 2017.
  4. G. Kol, R. Raz, and A. Tal. Time-space hardness of learning sparse parities. In Proc. 49th ACM Symp. on Theory of Computing, 2017.
  5. M. Krivelevich and B. Sudakov. Pseudo-random graphs. In More Sets, Graphs and Numbers, pages 199-262. Springer, 2006.
  6. D. Moshkovitz and M. Moshkovitz. Mixing implies lower bounds for space bounded learning. Technical Report TR17-017, ECCC, 2017.
  7. D. Moshkovitz and M. Moshkovitz. Mixing implies strong lower bounds for space bounded learning. Technical Report TR17-116, ECCC, 2017.
  8. R. Raz. Fast learning requires good memory: A time-space lower bound for parity learning. In Proc. 57th IEEE Symp. on Foundations of Computer Science, 2016.
  9. R. Raz. A time-space lower bound for a large class of learning problems. In Proc. 58th IEEE Symp. on Foundations of Computer Science, 2017.