Consistency Checking Problems: A Gateway to Parameterized Sample Complexity

Authors: Robert Ganian, Liana Khazaliya, Kirill Simonov




File

LIPIcs.IPEC.2023.18.pdf
  • Filesize: 1.05 MB
  • 17 pages

Author Details

Robert Ganian
  • Technische Universität Wien, Austria
Liana Khazaliya
  • Technische Universität Wien, Austria
Kirill Simonov
  • Hasso Plattner Institute, Universität Potsdam, Germany

Cite As

Robert Ganian, Liana Khazaliya, and Kirill Simonov. Consistency Checking Problems: A Gateway to Parameterized Sample Complexity. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 18:1-18:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.IPEC.2023.18

Abstract

Recently, Brand, Ganian and Simonov introduced a parameterized refinement of the classical PAC-learning sample complexity framework. A crucial outcome of their investigation is that for a very wide range of learning problems, there is a direct and provable correspondence between fixed-parameter PAC-learnability (in the sample complexity setting) and the fixed-parameter tractability of a corresponding "consistency checking" search problem (in the setting of computational complexity). Consistency checking problems can be seen as generalizations of classical search problems: instead of receiving a single instance, one receives multiple yes- and no-examples and is tasked with finding a solution that is consistent with all of the provided examples. Apart from a few initial results, consistency checking problems remain almost entirely unexplored from a parameterized complexity perspective. In this article, we provide an overview of these problems and their connection to parameterized sample complexity, with the primary aim of facilitating further research in this direction. We then establish the fixed-parameter (in)tractability of some of the arguably most natural consistency checking problems on graphs, and show that their complexity-theoretic behavior is surprisingly different from that of classical decision problems. Our new results cover consistency checking variants of problems as diverse as (k-)Path, Matching, 2-Coloring, Independent Set, and Dominating Set, among others.
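To make the notion of a consistency checking problem concrete, consider a toy instantiation for Independent Set (this is an illustrative sketch, not the formalization used in the paper): the examples are graphs over a fixed vertex set, each labeled yes or no, and one must find a single vertex set that is an independent set in every yes-example and fails to be one in every no-example. A brute-force solver, exponential in the number of vertices, might look as follows:

```python
from itertools import combinations

def is_independent(s, edges):
    """True iff no edge has both endpoints in s."""
    return not any(u in s and v in s for u, v in edges)

def consistent_solution(vertices, examples):
    """Search for a vertex set S that is an independent set in every
    yes-example and NOT an independent set in any no-example.

    `examples` is a list of (edge_list, label) pairs, where each edge
    list describes a graph over `vertices` and label is True (yes) or
    False (no). Brute force over all subsets -- illustration only.
    """
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if all(is_independent(s, edges) == label
                   for edges, label in examples):
                return s
    return None  # no consistent solution exists

# Two example graphs over vertices {1, 2, 3}:
# yes-example with edge (1, 2); no-example with edge (2, 3).
examples = [([(1, 2)], True), ([(2, 3)], False)]
print(consistent_solution([1, 2, 3], examples))  # prints {2, 3}
```

Note that a single yes-instance recovers the classical search problem, while the no-examples are what give consistency checking its distinct complexity-theoretic behavior.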

Subject Classification

ACM Subject Classification
  • Theory of computation → Parameterized complexity and exact algorithms
Keywords
  • consistency checking
  • sample complexity
  • fixed-parameter tractability

References

  1. Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, and Ruth Urner. Open problem: Are all VC-classes CPAC learnable? In Mikhail Belkin and Samory Kpotufe, editors, Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA, volume 134 of Proceedings of Machine Learning Research, pages 4636-4641. PMLR, 2021. URL: http://proceedings.mlr.press/v134/open-problem-agarwal21b.html.
  2. Michael Alekhnovich, Mark Braverman, Vitaly Feldman, Adam R. Klivans, and Toniann Pitassi. The complexity of properly learning simple concept classes. J. Comput. Syst. Sci., 74(1):16-34, 2008. URL: https://doi.org/10.1016/j.jcss.2007.04.011.
  3. Vikraman Arvind, Johannes Köbler, and Wolfgang Lindner. Parameterized learnability of juntas. Theor. Comput. Sci., 410(47-49):4928-4936, 2009. URL: https://doi.org/10.1016/j.tcs.2009.07.003.
  4. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929-965, 1989. URL: https://doi.org/10.1145/76359.76371.
  5. Olivier Bousquet, Steve Hanneke, Shay Moran, and Nikita Zhivotovskiy. Proper learning, helly number, and an optimal SVM bound. In Jacob D. Abernethy and Shivani Agarwal, editors, Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria], volume 125 of Proceedings of Machine Learning Research, pages 582-609. PMLR, 2020. URL: http://proceedings.mlr.press/v125/bousquet20a.html.
  6. Cornelius Brand, Robert Ganian, and Kirill Simonov. A parameterized theory of PAC learning. In Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023. AAAI Press, 2023. to appear. URL: https://arxiv.org/abs/2304.14058.
  7. Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015.
  8. Peter Damaschke and Azam Sheikh Muhammad. Competitive group testing and learning hidden vertex covers with minimum adaptivity. Discret. Math. Algorithms Appl., 2(3):291-312, 2010. URL: https://doi.org/10.1142/S179383091000067X.
  9. Reinhard Diestel. Graph Theory. Graduate Texts in Mathematics. Springer, Berlin, Heidelberg, 5th edition, 2017. URL: https://doi.org/10.1007/978-3-662-53622-3.
  10. Eduard Eiben, Sebastian Ordyniak, Giacomo Paesani, and Stefan Szeider. Learning small decision trees with large domain. In The 32nd International Joint Conference on Artificial Intelligence (IJCAI-23), August 19-25, 2023, Macao, S.A.R. International Joint Conferences on Artificial Intelligence Organization, 2023. to appear.
  11. Peter L. Hammer and Bruno Simeone. The splittance of a graph. Comb., 1(3):275-284, 1981. URL: https://doi.org/10.1007/BF02579333.
  12. Steve Hanneke. The optimal sample complexity of PAC learning. J. Mach. Learn. Res., 17:38:1-38:15, 2016. URL: http://jmlr.org/papers/v17/15-389.html.
  13. Michael J. Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
  14. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. Adaptive computation and machine learning. MIT Press, 2012. URL: http://mitpress.mit.edu/books/foundations-machine-learning-0.
  15. Elchanan Mossel, Ryan O'Donnell, and Rocco A. Servedio. Learning juntas. In Lawrence L. Larmore and Michel X. Goemans, editors, Proceedings of the 35th Annual ACM Symposium on Theory of Computing, June 9-11, 2003, San Diego, CA, USA, pages 206-212. ACM, 2003. URL: https://doi.org/10.1145/780542.780574.
  16. Sebastian Ordyniak and Stefan Szeider. Parameterized complexity of small decision tree learning. In Proceeding of AAAI-21, the Thirty-Fifth AAAI Conference on Artificial Intelligence, pages 6454-6462. AAAI Press, 2021. URL: https://ojs.aaai.org/index.php/AAAI/article/view/16800.
  17. Leonard Pitt and Leslie G. Valiant. Computational limitations on learning from examples. J. ACM, 35(4):965-984, 1988. URL: https://doi.org/10.1145/48014.63140.
  18. Oleg Sheyner and Jeannette M. Wing. Tools for generating and analyzing attack graphs. In FMCO, volume 3188 of Lecture Notes in Computer Science, pages 344-372. Springer, 2003. URL: https://www.cs.cmu.edu/~scenariograph/sheynerwing04.pdf.
  19. Leslie G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, 1984. URL: https://doi.org/10.1145/1968.1972.
  20. Steffen van Bergerem, Martin Grohe, and Martin Ritzert. On the parameterized complexity of learning first-order logic. In Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS '22, pages 337-346, New York, NY, USA, 2022. Association for Computing Machinery. URL: https://doi.org/10.1145/3517804.3524151.
  21. Jeannette M. Wing. Attack graph generation and analysis. In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, New York, NY, USA, 2006. Association for Computing Machinery. URL: https://doi.org/10.1145/1128817.1128822.