Abstract
We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfspaces requires n^{Omega(log(1/epsilon))} time under the assumption that k-sparse LPN requires n^{Omega(k)} time, ruling out a polynomial-time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian.
We also show that the problem of agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs on the uniform distribution in polynomial time. This complements the surprising result of Andoni et al. (2013), who showed that sparse polynomials are learnable under random Gaussian noise in polynomial time.
Taken together, these results show the inherent difficulty of designing supervised learning algorithms in Euclidean space even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution that allows us to relate the hardness of learning problems on two different domains and distributions.
BibTeX Entry
@InProceedings{klivans_et_al:LIPIcs:2014:4739,
author = {Adam Klivans and Pravesh Kothari},
title = {{Embedding Hard Learning Problems Into Gaussian Space}},
booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
pages = {793--809},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-74-3},
ISSN = {1868-8969},
year = {2014},
volume = {28},
editor = {Klaus Jansen and Jos{\'e} D. P. Rolim and Nikhil R. Devanur and Cristopher Moore},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2014/4739},
URN = {urn:nbn:de:0030-drops-47391},
doi = {10.4230/LIPIcs.APPROX-RANDOM.2014.793},
annote = {Keywords: distribution-specific hardness of learning, Gaussian space, halfspace learning, agnostic learning}
}
Keywords: distribution-specific hardness of learning, Gaussian space, halfspace learning, agnostic learning
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)
Issue Date: 2014
Date of publication: 04.09.2014