On the Hardness of Learning Sparse Parities

Authors Arnab Bhattacharyya, Ameet Gadekar, Suprovat Ghoshal, Rishi Saket




Cite As

Arnab Bhattacharyya, Ameet Gadekar, Suprovat Ghoshal, and Rishi Saket. On the Hardness of Learning Sparse Parities. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016) https://doi.org/10.4230/LIPIcs.ESA.2016.11

Abstract

This work investigates the hardness of computing sparse solutions to systems of linear equations over F_2. Consider the k-EvenSet problem: given a homogeneous system of linear equations over F_2 on n variables, decide if there exists a nonzero solution of Hamming weight at most k (i.e., a k-sparse solution). While there is a simple O(n^{k/2})-time algorithm for it, establishing fixed parameter intractability for k-EvenSet has been a notorious open problem. Towards this goal, we show that unless k-Clique can be solved in n^{o(k)} time, k-EvenSet has no polynomial time algorithm when k = omega(log^2(n)).
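As an illustration (not part of the paper), the simple O(n^{k/2})-time algorithm mentioned above can be realized by a meet-in-the-middle search: a weight-w solution corresponds to w columns of the coefficient matrix XOR-ing to zero, so it suffices to find two distinct subsets of at most ceil(k/2) columns with equal XOR. The sketch below represents each column as an integer bitmask; the function name `k_evenset` is our own.

```python
from itertools import combinations

def k_evenset(columns, k):
    """Meet-in-the-middle search for a nonzero solution of Hamming weight
    at most k to the homogeneous system whose matrix columns are given as
    integer bitmasks. Enumerates only subsets of size <= ceil(k/2), giving
    roughly O(n^{k/2}) work instead of the naive O(n^k)."""
    n = len(columns)
    seen = {}  # XOR of chosen columns -> first subset achieving it
    half = (k + 1) // 2  # ceil(k/2)
    for size in range(half + 1):
        for subset in combinations(range(n), size):
            x = 0
            for i in subset:
                x ^= columns[i]
            s = frozenset(subset)
            # Two distinct subsets with the same XOR: their symmetric
            # difference is a solution; accept it if its weight is <= k.
            if x in seen and seen[x] != s and len(seen[x] ^ s) <= k:
                return sorted(seen[x] ^ s)
            seen.setdefault(x, s)
    return None  # no k-sparse nonzero solution found
```

For example, with columns 1, 2, 3 (so column 0 XOR column 1 XOR column 2 = 0), the search finds the weight-3 solution when k = 3 but reports none for k = 2.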

Our work also shows that the non-homogeneous generalization of the problem, which we call k-VectorSum, is W[1]-hard on instances where the number of equations is O(k*log(n)), improving on previous reductions which produced Omega(n) equations. We use the hardness of k-VectorSum as a starting point to prove the result for k-EvenSet, and additionally strengthen the former to show the hardness of approximately learning k-juntas. In particular, we prove that given a system of O(exp(O(k))*log(n)) linear equations, it is W[1]-hard to decide if there is a k-sparse linear form satisfying all the equations, or whether every function on at most k variables (a k-junta) satisfies at most a (1/2 + epsilon)-fraction of the equations, for any constant epsilon > 0. In the setting of computational learning, this shows hardness of approximate non-proper learning of k-parities.
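To make the k-VectorSum problem concrete (a toy sketch of ours, not from the paper): given the columns of the coefficient matrix and a target vector, one asks for at most k columns whose XOR equals the target. The brute-force O(n^k) baseline, against which the W[1]-hardness result should be read, looks as follows; the name `k_vector_sum` is our own.

```python
from itertools import combinations

def k_vector_sum(columns, target, k):
    """Exhaustive O(n^k) search for a k-sparse solution to Av = t over F_2:
    find at most k columns of A (given as integer bitmasks) whose XOR
    equals the target bitmask t. Returns the chosen column indices."""
    for size in range(1, k + 1):
        for subset in combinations(range(len(columns)), size):
            x = 0
            for i in subset:
                x ^= columns[i]
            if x == target:
                return list(subset)
    return None  # no solution of sparsity <= k
```

For instance, with columns 1, 2, 4 and target 5, the pair of columns 0 and 2 is a 2-sparse solution, while no subset of these columns XORs to 8.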

In a similar vein, we use the hardness of k-EvenSet to show that for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m,n)*2^{o(sqrt(k))} time algorithm to decide whether a given set of m points in F_2^n satisfies: (i) there exists a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the points, or (ii) every non-trivial degree-d polynomial P supported on at most k variables evaluates to zero on approximately a Pr_{z in F_2^n}[P(z) = 0] fraction of the points, i.e., P is fooled by the set of points.
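The "fooling" condition in (ii) can be checked directly for tiny n (an illustrative sketch of ours, not the paper's construction): a point set fools a polynomial P if the fraction of the points on which P vanishes is close to the fraction of all of F_2^n on which it vanishes. The helper name `is_fooled` and the tolerance parameter are our own.

```python
from itertools import product

def is_fooled(points, poly, n, tol=0.1):
    """Return True if the point set 'fools' the polynomial: the fraction
    of the given points where poly vanishes is within tol of the fraction
    of ALL of F_2^n where it vanishes. Exhaustive over 2^n points, so
    only feasible for tiny n; poly maps a 0/1 tuple to a value mod 2."""
    frac_points = sum(1 for z in points if poly(z) == 0) / len(points)
    frac_space = sum(1 for z in product((0, 1), repeat=n)
                     if poly(z) == 0) / 2 ** n
    return abs(frac_points - frac_space) <= tol
```

For example, the degree-2 polynomial P(z) = z_0 * z_1 vanishes on 3/4 of F_2^2, so the full cube trivially fools it, while the single point (1,1) does not.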

Lastly, we study approximation in the sparsity of the solution. Define the Gap-k-VectorSum problem as: given an instance of k-VectorSum of size n, decide if there exists a k-sparse solution, or every solution has sparsity at least k' = (1+delta_0)k. Assuming the Exponential Time Hypothesis, we show that for some constants c_0, delta_0 > 0 there is no poly(n) time algorithm for Gap-k-VectorSum when k = omega((log(log(n)))^{c_0}).


Keywords
  • Fixed Parameter Tractable
  • Juntas
  • Minimum Distance of Code
  • Pseudorandom Generators

