Distributional PAC-Learning from Nisan’s Natural Proofs

Author Ari Karchmer




File

LIPIcs.ITCS.2024.68.pdf
  • Filesize: 0.91 MB
  • 23 pages

Author Details

Ari Karchmer
  • Boston University, MA, USA

Acknowledgements

Thanks to Mark Bun, Ran Canetti, Russell Impagliazzo, and Emanuele Viola for thoughtful conversations about this research. Thank you to Mauricio Karchmer for advice on presentational aspects of this paper. Finally, special thanks to Marco Carmosino for helpful comments on a draft of this paper, as well as many discussions pertaining to this research. Part of this research was completed while I was visiting the Simons Institute for the Theory of Computing.

Cite As

Ari Karchmer. Distributional PAC-Learning from Nisan’s Natural Proofs. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 68:1-68:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.ITCS.2024.68

Abstract

Do natural proofs imply efficient learning algorithms? Carmosino et al. (2016) demonstrated that natural proofs of circuit lower bounds for Λ imply efficient algorithms for learning Λ-circuits, but only over the uniform distribution, with membership queries, and provided AC⁰[p] ⊆ Λ. We consider whether this implication can be generalized to Λ ⊉ AC⁰[p], and to learning algorithms which use only random examples and learn over arbitrary example distributions (Valiant’s PAC-learning model). We first observe that, if, for any circuit class Λ, there is an implication from natural proofs for Λ to PAC-learning for Λ, then standard assumptions from lattice-based cryptography do not hold. In particular, we observe that depth-2 majority circuits are a (conditional) counterexample to this fully general implication, since Nisan (1993) gave a natural proof, but Klivans and Sherstov (2009) showed hardness of PAC-learning under lattice-based assumptions. We thus ask: what learning algorithms can we reasonably expect to follow from Nisan’s natural proofs? Our main result is that all natural proofs arising from a type of communication complexity argument, including Nisan’s, imply PAC-learning algorithms in a new distributional variant (i.e., an "average-case" relaxation) of Valiant’s PAC model. Our distributional PAC model is stronger than the average-case prediction model of Blum et al. (1993) and the heuristic PAC model of Nanashima (2021), and has several important properties which make it of independent interest, such as being boosting-friendly. The main applications of our result are new distributional PAC-learning algorithms for depth-2 majority circuits, polytopes and DNFs over natural target distributions, as well as the nonexistence of encoded-input weak PRFs that can be evaluated by depth-2 majority circuits.
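The abstract is set in Valiant’s PAC model, where a learner receives labeled examples drawn from an unknown distribution and must output a hypothesis with small error on that same distribution. As a toy illustration of that model only (not an algorithm from this paper), the following sketch PAC-learns the simple class of one-dimensional threshold functions from random examples; the function names and the choice of concept class are illustrative assumptions.

```python
import random

def pac_learn_threshold(examples):
    """Toy PAC learner for threshold concepts c_t(x) = [x >= t]:
    output a hypothesis consistent with the sample, namely the
    threshold at the smallest positively labeled example seen."""
    positives = [x for x, y in examples if y == 1]
    t_hat = min(positives) if positives else 1.0
    return lambda x: 1 if x >= t_hat else 0

random.seed(0)
t = 0.37  # unknown target concept c_t

# Labeled examples from the example distribution (here uniform on [0, 1]).
sample = [(x, 1 if x >= t else 0)
          for x in (random.random() for _ in range(1000))]
h = pac_learn_threshold(sample)

# Estimate h's error on fresh examples from the same distribution.
test = [random.random() for _ in range(10000)]
err = sum(h(x) != (1 if x >= t else 0) for x in test) / len(test)
print(err)
```

With m examples, this consistent-hypothesis rule errs only on the interval between the true threshold and the smallest positive example, which has mass roughly 1/m under the uniform example distribution, so the estimated error is small with high probability.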

Subject Classification

ACM Subject Classification
  • Theory of computation
Keywords
  • PAC-learning
  • average-case complexity
  • communication complexity
  • natural proofs

References

  1. László Babai, Noam Nisan, and Márió Szegedy. Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs. Journal of Computer and System Sciences, 45(2):204-232, 1992.
  2. Avrim Blum, Merrick Furst, Michael Kearns, and Richard J. Lipton. Cryptographic primitives based on hard learning problems. In Annual International Cryptology Conference, pages 278-291. Springer, 1993.
  3. Dan Boneh, Yuval Ishai, Alain Passelègue, Amit Sahai, and David J. Wu. Exploring crypto dark matter: New simple PRF candidates and their applications. In Theory of Cryptography: 16th International Conference, TCC 2018, Panaji, India, November 11-14, 2018, Proceedings, Part II, pages 699-729. Springer, 2018.
  4. Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, and Peter Scholl. Low-complexity weak pseudorandom functions in AC⁰[MOD2]. In Annual International Cryptology Conference, pages 487-516. Springer, 2021.
  5. Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets, and Antonina Kolokolova. Learning algorithms from natural proofs. In 31st Conference on Computational Complexity (CCC 2016). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016.
  6. Lijie Chen. Toward super-polynomial size lower bounds for depth-two threshold circuits. arXiv preprint, 2018. URL: https://arxiv.org/abs/1805.10698.
  7. Fan R. K. Chung and Prasad Tetali. Communication complexity and quasi-randomness. SIAM Journal on Discrete Mathematics, 6(1):110-123, 1993.
  8. Amit Daniely and Shai Shalev-Shwartz. Complexity theoretic limitations on learning DNF's. In Conference on Learning Theory, pages 815-830. PMLR, 2016.
  9. Carlos Domingo and Osamu Watanabe. MadaBoost: A modification of AdaBoost. In COLT, pages 180-189, 2000.
  10. Uriel Feige. Relations between average case complexity and approximation complexity. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, pages 534-543, 2002.
  11. Vitaly Feldman and David Xiao. Sample complexity bounds on differentially private learning via communication complexity. In Conference on Learning Theory, pages 1000-1019. PMLR, 2014.
  12. Halley Goldberg and Valentine Kabanets. Improved learning from Kolmogorov complexity. ECCC Report, 2023.
  13. Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. Journal of the ACM (JACM), 33(4):792-807, 1986.
  14. Johan Håstad, Russell Impagliazzo, Leonid A. Levin, and Michael Luby. A pseudorandom generator from any one-way function. SIAM Journal on Computing, 28(4):1364-1396, 1999.
  15. Jeffrey C. Jackson, Homin K. Lee, Rocco A. Servedio, and Andrew Wan. Learning random monotone DNF. Discrete Applied Mathematics, 159(5):259-271, 2011.
  16. Jeffrey C. Jackson and Rocco A. Servedio. Learning random log-depth decision trees under uniform distribution. SIAM Journal on Computing, 34(5):1107-1128, 2005.
  17. Daniel Kane, Roi Livni, Shay Moran, and Amir Yehudayoff. On communication complexity of classification problems. In Conference on Learning Theory, pages 1903-1943. PMLR, 2019.
  18. Ari Karchmer. Agnostic membership query learning with nontrivial savings: New results, techniques. arXiv preprint, 2023. URL: https://arxiv.org/abs/2311.06690.
  19. Ari Karchmer. Distributional PAC-learning from Nisan's natural proofs, 2023. URL: https://arxiv.org/abs/2310.03641.
  20. Michael Kearns and Leslie Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM (JACM), 41(1):67-95, 1994.
  21. Adam R. Klivans, Ryan O'Donnell, and Rocco A. Servedio. Learning intersections and thresholds of halfspaces. Journal of Computer and System Sciences, 68(4):808-840, 2004.
  22. Adam R. Klivans and Alexander A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. Journal of Computer and System Sciences, 75(1):2-12, 2009.
  23. Ilan Kremer, Noam Nisan, and Dana Ron. On randomized one-round communication complexity. Computational Complexity, 8:21-49, 1999.
  24. Eyal Kushilevitz and Noam Nisan. Communication complexity, 1996.
  25. Nati Linial and Adi Shraibman. Learning complexity vs communication complexity. Combinatorics, Probability and Computing, 18(1-2):227-245, 2009.
  26. Mikito Nanashima. A theory of heuristic learnability. In Conference on Learning Theory, pages 3483-3525. PMLR, 2021.
  27. Noam Nisan. The communication complexity of threshold gates. Combinatorics, Paul Erdős is Eighty, 1:301-315, 1993.
  28. Ran Raz. The BNS-Chung criterion for multi-party communication complexity. Computational Complexity, 9(2):113-122, 2000.
  29. Alexander A. Razborov. Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Mathematical Notes of the Academy of Sciences of the USSR, 41(4):333-338, 1987.
  30. Alexander A. Razborov and Steven Rudich. Natural proofs. Journal of Computer and System Sciences, 55(1):24-35, 1997.
  31. Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM (JACM), 56(6):1-40, 2009.
  32. Oded Regev. On the complexity of lattice problems with polynomial approximation factors. In The LLL Algorithm: Survey and Applications, pages 475-496. Springer, 2009.
  33. Robert E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, 1990. URL: https://doi.org/10.1007/BF00116037.
  34. Linda Sellie. Exact learning of random DNF over the uniform distribution. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 45-54, 2009.
  35. Roman Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In Proceedings of the nineteenth annual ACM symposium on Theory of computing, pages 77-82, 1987.
  36. Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
  37. Emanuele Viola. The communication complexity of addition. Combinatorica, 35:703-747, 2015.
  38. Emanuele Viola and Avi Wigderson. Norms, XOR lemmas, and lower bounds for GF(2) polynomials and multiparty protocols. In Twenty-Second Annual IEEE Conference on Computational Complexity (CCC'07), pages 141-154. IEEE, 2007.
  39. Andrew C. Yao. Theory and application of trapdoor functions. In 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982), pages 80-91. IEEE, 1982.