The Random-Query Model and the Memory-Bounded Coupon Collector

Authors: Ran Raz, Wei Zhan



Author Details

Ran Raz
  • Department of Computer Science, Princeton University, NJ, United States
Wei Zhan
  • Department of Computer Science, Princeton University, NJ, United States

Cite As

Ran Raz and Wei Zhan. The Random-Query Model and the Memory-Bounded Coupon Collector. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 20:1-20:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)
https://doi.org/10.4230/LIPIcs.ITCS.2020.20

Abstract

We study a new model of space-bounded computation, the random-query model. The model is based on a branching program over the input variables x_1,…,x_n. In each time step, the branching program receives as input a random index i ∈ {1,…,n}, together with the input variable x_i (rather than querying an input variable of its choice, as in a standard (oblivious) branching program). We motivate the new model in various ways and study time-space tradeoff lower bounds in it.

Our main technical result is a quadratic time-space lower bound for zero-error computations in the random-query model, for XOR, Majority and many other functions. More precisely, a zero-error computation is one that stops with high probability and, conditioned on the event that it stopped, outputs the correct answer with probability 1. We prove that for any Boolean function f: {0,1}^n → {0,1} with sensitivity k, any zero-error computation with time T and space S satisfies T ⋅ (S + log n) ≥ Ω(n⋅k). We note that the best time-space lower bounds for standard oblivious branching programs are only slightly super-linear, and improving these bounds is an important long-standing open problem.

To prove our results, we study a memory-bounded variant of the coupon-collector problem that seems to us of independent interest and, to the best of our knowledge, has not been studied before. We consider a zero-error version of the coupon-collector problem, in which the coupon collector may explicitly choose to stop once they are certain, with zero error, that all coupons have already been collected. We prove that any zero-error coupon collector that stops with high probability in time T and uses space S satisfies T ⋅ (S + log n) ≥ Ω(n²), where n is the number of different coupons.
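To make the model concrete, the following is a minimal simulation sketch (not the paper's construction) of a zero-error computation of XOR in the random-query model: at each step a uniformly random index i arrives together with the bit x_i, the program remembers which indices it has seen (roughly n bits of space) plus a running parity, and it stops only once every index has appeared, so that conditioned on stopping the output is correct with probability 1. The function name and interface are illustrative assumptions.

```python
import random

def random_query_xor(n, x, rng=None):
    """Illustrative zero-error XOR in the random-query model.

    Memory: an n-bit table of collected indices plus one parity bit,
    i.e. S is about n. Stopping only after all indices are seen makes
    this a coupon-collector process, so T is about n log n steps in
    expectation.
    """
    rng = rng or random.Random(0)
    seen = [False] * n   # n bits: which coupons/indices were collected
    parity = 0           # 1 bit: running XOR of the distinct bits seen
    steps = 0
    while not all(seen):
        i = rng.randrange(n)   # random query: a uniform index i ...
        bit = x[i]             # ... delivered together with x_i
        if not seen[i]:
            seen[i] = True
            parity ^= bit      # fold each input bit in exactly once
        steps += 1
    return parity, steps
```

With S ≈ n and expected T ≈ n log n, this strategy satisfies T ⋅ (S + log n) ≈ n² log n, which is consistent with (and a log factor above) the Ω(n⋅k) and Ω(n²) lower bounds stated in the abstract.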

Subject Classification

ACM Subject Classification
  • Theory of computation → Models of computation
Keywords
  • random-query model
  • time-space trade-offs
  • branching programs

