# Agnostically Learning Boolean Functions with Finite Polynomial Representation

## File

LIPIcs.ISAAC.2017.29.pdf
• Filesize: 485 kB
• 11 pages

## Cite As

Ning Ding. Agnostically Learning Boolean Functions with Finite Polynomial Representation. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 29:1-29:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)
https://doi.org/10.4230/LIPIcs.ISAAC.2017.29

## Abstract

Agnostic learning is an extremely hard task in computational learning theory. In this paper we revisit the results of [Kalai et al., SIAM J. Comput. 2008] on agnostically learning Boolean functions that admit a finite polynomial representation, as well as functions that can be approximated by such representations. An example of the former is the class of all Boolean low-degree polynomials. For the former, [Kalai et al., SIAM J. Comput. 2008] introduced the l_1-polynomial regression method, which learns them to error opt + epsilon. We present a simple instantiation of one step in this method and give the corresponding analysis. Moreover, we show that even skipping this step still yields a learning result with error 2·opt + epsilon. We then apply the result for concept classes approximable by such representations to learn richer specific classes. Our result is that the class of s-term DNF formulae can be agnostically learned to error opt + epsilon with respect to arbitrary distributions, for any epsilon, in time poly(n^d, 1/epsilon), where d = O(\sqrt{n}\cdot s\cdot \log s \cdot \log^2(1/epsilon)).

### Keywords
• Agnostic Learning
• Boolean Functions
• Low-Degree Polynomials

## References

1. Pranjal Awasthi, Maria-Florina Balcan, and Philip M. Long. The power of localization for efficiently learning linear separators with noise. In David B. Shmoys, editor, Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 449-458. ACM, 2014. URL: http://dx.doi.org/10.1145/2591796.2591839.
2. Richard Beigel. When do extra majority gates help? polylog(n) majority gates are equivalent to one. Computational Complexity, 4:314-324, 1994.
3. Nader H. Bshouty and Christino Tamon. On the Fourier spectrum of monotone functions. J. ACM, 43(4):747-770, 1996. URL: http://dx.doi.org/10.1145/234533.234564.
4. T. M. Cover. Capacity problems for linear machines. Pattern Recognition, pages 283-289, 1968.
5. Amit Daniely. A PTAS for agnostically learning halfspaces. In Peter Grünwald, Elad Hazan, and Satyen Kale, editors, Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, volume 40 of JMLR Workshop and Conference Proceedings, pages 484-502. JMLR.org, 2015. URL: http://jmlr.org/proceedings/papers/v40/Daniely15.html.
6. Parikshit Gopalan, Adam Tauman Kalai, and Adam R. Klivans. Agnostically learning decision trees. In Cynthia Dwork, editor, Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, pages 527-536. ACM, 2008. URL: http://dx.doi.org/10.1145/1374376.1374451.
7. Parikshit Gopalan and Rocco A. Servedio. Learning and lower bounds for AC^0 with threshold gates. In Maria J. Serna, Ronen Shaltiel, Klaus Jansen, and José D. P. Rolim, editors, APPROX-RANDOM, volume 6302 of Lecture Notes in Computer Science, pages 588-601. Springer, 2010.
8. David Haussler. Decision theoretic generalizations of the pac model for neural net and other learning applications. Inf. Comput., 100(1):78-150, 1992.
9. Lisa Hellerstein and Rocco A. Servedio. On PAC learning algorithms for rich boolean function classes. Theor. Comput. Sci., 384(1):66-76, 2007. URL: http://dx.doi.org/10.1016/j.tcs.2007.05.018.
10. Jeffrey C. Jackson, Adam Klivans, and Rocco A. Servedio. Learnability beyond AC^0. In IEEE Conference on Computational Complexity, page 26. IEEE Computer Society, 2002.
11. Adam Tauman Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. Agnostically learning halfspaces. SIAM J. Comput., 37(6):1777-1805, 2008. URL: http://dx.doi.org/10.1137/060649057.
12. Michael J. Kearns, Robert E. Schapire, and Linda Sellie. Toward efficient agnostic learning. Machine Learning, 17(2-3):115-141, 1994.
13. Adam R. Klivans, Ryan O'Donnell, and Rocco A. Servedio. Learning intersections and thresholds of halfspaces. J. Comput. Syst. Sci., 68(4):808-840, 2004. URL: http://dx.doi.org/10.1016/j.jcss.2003.11.002.
14. Adam R. Klivans and Rocco A. Servedio. Learning DNF in time 2^{Õ(n^{1/3})}. In Jeffrey Scott Vitter, Paul G. Spirakis, and Mihalis Yannakakis, editors, Proceedings on 33rd Annual ACM Symposium on Theory of Computing, July 6-8, 2001, Heraklion, Crete, Greece, pages 258-265. ACM, 2001. URL: http://dx.doi.org/10.1145/380752.380809.
15. Nathan Linial, Yishay Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. J. ACM, 40(3):607-620, 1993.
16. Yishay Mansour. An O(n^{log log n}) learning algorithm for DNF under the uniform distribution. J. Comput. Syst. Sci., 50(3):543-550, 1995. URL: http://dx.doi.org/10.1006/jcss.1995.1043.
17. Noam Nisan and Mario Szegedy. On the degree of Boolean functions as real polynomials. Computational Complexity, 4:301-313, 1994. URL: http://dx.doi.org/10.1007/BF01263419.
18. Leslie G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, 1984.
19. V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.