Locally Covert Learning

Authors: Justin Holmgren, Ruta Jawale




File

LIPIcs.ITC.2023.14.pdf
  • Filesize: 0.6 MB
  • 12 pages

Author Details

Justin Holmgren
  • NTT Research, Sunnyvale, CA, USA
Ruta Jawale
  • University of Illinois at Urbana-Champaign, IL, USA

Cite As

Justin Holmgren and Ruta Jawale. Locally Covert Learning. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 14:1-14:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.ITC.2023.14

Abstract

The goal of a covert learning algorithm is to learn a function f by querying it, while ensuring that an adversary, who sees all queries and their responses, is unable to (efficiently) learn any more about f than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across k servers and we only limit what is learnable by k - 1 colluding servers. For any constant k, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and for computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of O(log n)-juntas, and only with k = 2 servers [Yuval Ishai et al., 2019]. Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by k-tuples in which any k - 1 components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with k.
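The abstract describes query k-tuples whose components XOR to a target point while any k - 1 of them are jointly uniform. As a rough illustration of that structure only (this is additive secret sharing over F_2^n, not the authors' full learning algorithm), the following hypothetical sketch splits a query point into k shares so that any k - 1 colluding servers see only uniformly random points:

```python
import secrets

def correlated_queries(z: int, k: int, n: int) -> list[int]:
    """Split a target point z in {0,1}^n (encoded as an n-bit int) into
    k query points whose XOR equals z.

    The first k-1 shares are independent and uniform, so ANY k-1 of the
    k shares are jointly uniform: colluding servers holding them learn
    nothing about z."""
    shares = [secrets.randbits(n) for _ in range(k - 1)]
    last = z
    for s in shares:
        last ^= s  # make the shares XOR to z
    return shares + [last]

# Example: k = 3 servers, n = 8-bit query points.
z = 0b10110001
queries = correlated_queries(z, k=3, n=8)
assert queries[0] ^ queries[1] ^ queries[2] == z
```

Each server answers its query honestly; only a party holding all k responses can recombine them, which matches the abstract's observation that the number of required query tuples (and hence the total cost) grows with k.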

Subject Classification

ACM Subject Classification
  • Security and privacy → Information-theoretic techniques
  • Theory of computation → Machine learning theory
Keywords
  • learning theory
  • adversarial machine learning
  • zero knowledge
  • Fourier analysis of boolean functions
  • Goldreich-Levin algorithm
  • Kushilevitz-Mansour algorithm


References

  1. Mihir Bellare. The Goldreich-Levin theorem, October 1999. Lecture notes, available at URL: https://cseweb.ucsd.edu/~mihir/papers/gl.pdf.
  2. Avrim Blum. Learning a function of r relevant variables. In Bernhard Schölkopf and Manfred K. Warmuth, editors, Computational Learning Theory and Kernel Machines, 16th Annual Conference on Computational Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003, Proceedings, volume 2777 of Lecture Notes in Computer Science, pages 731-733. Springer, 2003. URL: https://doi.org/10.1007/978-3-540-45167-9_54.
  3. Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50(4):506-519, 2003.
  4. Ran Canetti and Ari Karchmer. Covert learning: How to learn with an untrusted intermediary. In Kobbi Nissim and Brent Waters, editors, Theory of Cryptography - 19th International Conference, TCC 2021, Raleigh, NC, USA, November 8-11, 2021, Proceedings, Part III, volume 13044 of Lecture Notes in Computer Science, pages 1-31. Springer, 2021. URL: https://doi.org/10.1007/978-3-030-90456-2_1.
  5. Oded Goldreich and Leonid A. Levin. A hard-core predicate for all one-way functions. In David S. Johnson, editor, Proceedings of the 21st Annual ACM Symposium on Theory of Computing, May 14-17, 1989, Seattle, Washington, USA, pages 25-32. ACM, 1989. URL: https://doi.org/10.1145/73007.73010.
  6. Shafi Goldwasser and Silvio Micali. Probabilistic encryption and how to play mental poker keeping secret all partial information. In STOC, pages 365-377. ACM, 1982.
  7. Shafi Goldwasser, Guy N. Rothblum, Jonathan Shafer, and Amir Yehudayoff. Interactive proofs for verifying machine learning. In James R. Lee, editor, 12th Innovations in Theoretical Computer Science Conference, ITCS 2021, January 6-8, 2021, Virtual Conference, volume 185 of LIPIcs, pages 41:1-41:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. URL: https://doi.org/10.4230/LIPIcs.ITCS.2021.41.
  8. Russell Impagliazzo and Michael Luby. One-way functions are essential for complexity based cryptography (extended abstract). In FOCS, pages 230-235. IEEE Computer Society, 1989.
  9. Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptographic sensing. In CRYPTO (3), volume 11694 of Lecture Notes in Computer Science, pages 583-604. Springer, 2019.
  10. Eyal Kushilevitz and Yishay Mansour. Learning decision trees using the Fourier spectrum. SIAM J. Comput., 22(6):1331-1348, 1993. URL: https://doi.org/10.1137/0222080.
  11. Yishay Mansour. Learning Boolean functions via the Fourier transform. Theoretical Advances in Neural Computation and Learning, pages 391-424, 1994.
  12. Ryan O'Donnell. Analysis of Boolean Functions. Cambridge University Press, 2014. Available online at URL: https://arxiv.org/abs/2105.10386.
  13. Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. In STOC, pages 84-93. ACM, 2005.