Conditional Sparse Linear Regression

Author: Brendan Juba

File: LIPIcs.ITCS.2017.45.pdf (456 kB, 14 pages)

Cite As

Brendan Juba. Conditional Sparse Linear Regression. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017). https://doi.org/10.4230/LIPIcs.ITCS.2017.45

Abstract

Machine learning and statistics typically focus on building models that capture the vast majority of the data, possibly ignoring a small subset of data as "noise" or "outliers." By contrast, here we consider the problem of jointly identifying a significant (but perhaps small) segment of a population in which there is a highly sparse linear regression fit, together with the coefficients for that fit. We contend that such tasks are of interest both because the models themselves may achieve better predictions on such special cases and because they may aid our understanding of the data. We give algorithms for such problems under the sup norm, when this unknown segment of the population is described by a k-DNF condition and the regression fit is s-sparse, for constant k and s. For the variants of this problem in which the regression fit is not so sparse or the error is measured in expectation, we also give a preliminary algorithm and highlight the question as a challenge for future work.
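To make the search problem concrete, the sketch below casts the sup-norm variant as a brute-force search: a Chebyshev (L-infinity) fit is a small linear program, candidate sparse regressors are generated from small subsets of points and coordinates, and the k-DNF condition is recovered as the union of all short conjunctions whose satisfying examples the candidate fits well. This is a minimal illustration under stated assumptions, not the paper's algorithm: the function names (chebyshev_fit, good_terms, conditional_sparse_fit), the use of (s+2)-point subsets as candidate generators, and the NumPy/SciPy dependencies are all assumptions of this sketch.

    import itertools

    import numpy as np
    from scipy.optimize import linprog


    def chebyshev_fit(X, y):
        """Sup-norm (Chebyshev) linear fit via the LP
        min t  subject to  -t <= X w - y <= t  (componentwise)."""
        n, d = X.shape
        c = np.zeros(d + 1)
        c[-1] = 1.0  # objective: minimize the sup-norm bound t
        A = np.vstack([np.hstack([X, -np.ones((n, 1))]),
                       np.hstack([-X, -np.ones((n, 1))])])
        b = np.concatenate([y, -y])
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(None, None)] * d + [(0, None)])
        return res.x[:d], res.x[d]  # coefficients, achieved sup-norm error


    def good_terms(Z, err, eps, k):
        """Yield the satisfying sets of all conjunctions of at most k
        literals over the Boolean attributes Z whose satisfying examples
        all have prediction error at most eps."""
        Zb = Z.astype(bool)
        m = Zb.shape[1]
        lits = [Zb[:, j] for j in range(m)] + [~Zb[:, j] for j in range(m)]
        for size in range(1, k + 1):
            for combo in itertools.combinations(range(2 * m), size):
                sat = np.logical_and.reduce([lits[j] for j in combo])
                if sat.any() and np.all(err[sat] <= eps):
                    yield sat


    def conditional_sparse_fit(X, Z, y, s, k, eps):
        """Brute-force search for an s-sparse w and a k-DNF condition
        over Z such that every covered example has |<w, x> - y| <= eps;
        returns the candidate covering the most examples. Exponential in
        s, k, and the dimensions -- intended only for tiny instances."""
        n, d = X.shape
        best = None
        for S in itertools.combinations(range(d), s):  # sparse support
            # Generate candidate fits from small point subsets: a sup-norm
            # fit is pinned down by few points (s + 2 here is an
            # assumption of this sketch, not the paper's bound).
            for P in itertools.combinations(range(n), min(n, s + 2)):
                w_S, _ = chebyshev_fit(X[np.ix_(P, S)], y[list(P)])
                w = np.zeros(d)
                w[list(S)] = w_S
                err = np.abs(X @ w - y)
                covered = np.zeros(n, dtype=bool)
                for sat in good_terms(Z, err, eps, k):
                    covered |= sat  # the union of good terms is the k-DNF
                if best is None or covered.sum() > best[0]:
                    best = (covered.sum(), w, covered)
        return best

For example, conditional_sparse_fit(X, Z, y, s=2, k=2, eps=0.1) would return the number of covered examples, a 2-sparse coefficient vector, and the Boolean mask of the covered segment; the k-DNF itself can be read off by retaining the conjunctions that contributed to the mask.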

Keywords
  • linear regression
  • conditional regression
  • conditional distribution search

