Goldwasser, Shafi;
Rothblum, Guy N.;
Shafer, Jonathan;
Yehudayoff, Amir
Interactive Proofs for Verifying Machine Learning
Abstract
We consider the following question: using a source of labeled data and interaction with an untrusted prover, what is the complexity of verifying that a given hypothesis is "approximately correct"? We study interactive proof systems for PAC verification, where a verifier that interacts with a prover is required to accept good hypotheses, and reject bad hypotheses. Both the verifier and the prover are efficient and have access to labeled data samples from an unknown distribution. We are interested in cases where the verifier can use significantly less data than is required for (agnostic) PAC learning, or use a substantially cheaper data source (e.g., using only random samples for verification, even though learning requires membership queries). We believe that today, when data and data-driven algorithms are quickly gaining prominence, the question of verifying purported outcomes of data analyses is very well-motivated.
We show three main results. First, we prove that for a specific hypothesis class, verification is significantly cheaper than learning in terms of sample complexity, even if the verifier engages with the prover only in a single-round (NP-like) protocol. Moreover, for this class we prove that single-round verification is also significantly cheaper than testing closeness to the class. Second, for the broad class of Fourier-sparse boolean functions, we show a multi-round (IP-like) verification protocol, where the prover uses membership queries, and the verifier is able to assess the result while only using random samples. Third, we show that verification is not always more efficient. Namely, we show a class of functions where verification requires as many samples as learning does, up to a logarithmic factor.
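To make the "verification from random samples" idea concrete, here is a minimal sketch of the simplest possible check a verifier could run: draw fresh i.i.d. labeled samples and test whether the prover's hypothesis achieves its claimed loss. This is purely illustrative and is not the paper's interactive protocol; the names `verify_hypothesis` and `source` and all parameter choices are hypothetical, and the sample bound follows from Hoeffding's inequality.

```python
import math
import random

def verify_hypothesis(h, sample_source, epsilon, delta, target_loss):
    """Accept h iff its empirical 0-1 loss on fresh random samples is
    within epsilon/2 of the claimed target loss.

    Illustrative sketch only (NOT the paper's protocol): by Hoeffding's
    inequality, m samples make the empirical loss (epsilon/2)-accurate
    with probability at least 1 - delta."""
    m = math.ceil(math.log(2 / delta) / (2 * (epsilon / 2) ** 2))
    errors = 0
    for _ in range(m):
        x, y = sample_source()   # one i.i.d. labeled example
        if h(x) != y:
            errors += 1
    return errors / m <= target_loss + epsilon / 2

# Toy data source: uniform 5-bit strings labeled by their first bit.
random.seed(0)
def source():
    x = [random.randint(0, 1) for _ in range(5)]
    return x, x[0]

# A hypothesis that matches the labeling rule exactly is accepted.
print(verify_hypothesis(lambda x: x[0], source,
                        epsilon=0.2, delta=0.05, target_loss=0.0))  # → True
```

The point of the paper is that for some classes the verifier can get away with far fewer samples (or a cheaper sample type) than this naive estimate-and-check, by exploiting interaction with the prover.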
BibTeX Entry
@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2021.41,
  author =    {Shafi Goldwasser and Guy N. Rothblum and Jonathan Shafer and Amir Yehudayoff},
  title =     {{Interactive Proofs for Verifying Machine Learning}},
  booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =     {41:1--41:19},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-95977-177-1},
  ISSN =      {1868-8969},
  year =      {2021},
  volume =    {185},
  editor =    {James R. Lee},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {https://drops.dagstuhl.de/opus/volltexte/2021/13580},
  URN =       {urn:nbn:de:0030-drops-135806},
  doi =       {10.4230/LIPIcs.ITCS.2021.41},
  annote =    {Keywords: PAC learning, Fourier analysis of boolean functions, Complexity gaps, Complexity lower bounds, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm, Distribution testing}
}
Keywords:
PAC learning, Fourier analysis of boolean functions, Complexity gaps, Complexity lower bounds, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm, Distribution testing

Seminar:
12th Innovations in Theoretical Computer Science Conference (ITCS 2021)

Issue date:
2021

Date of publication:
04.02.2021