Towards an Atlas of Computational Learning Theory

Authors: Timo Kötzing, Martin Schirneck



File

LIPIcs.STACS.2016.47.pdf
  • Filesize: 0.59 MB
  • 13 pages

Document Identifiers

  • DOI: 10.4230/LIPIcs.STACS.2016.47


Cite As

Timo Kötzing and Martin Schirneck. Towards an Atlas of Computational Learning Theory. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 47:1-47:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016) https://doi.org/10.4230/LIPIcs.STACS.2016.47

Abstract

A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform us about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they tell us which restrictions can be observed without losing learning power.

With this paper we propose that one main focus of future research in Computational Learning should be a structured approach to determining the relations between different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be depicted as a map, a diagram detailing how the criteria compare. Once we have maps for many relevant sets of learning criteria, the collection of these maps forms an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth.

In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.
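
The map format itself is simple enough to prototype. Below is a minimal Python sketch of how such a map could be encoded and rendered: a relation table over a few criteria and an export to Graphviz DOT format. The abbreviations (Sd for set-driven, Psd for partially set-driven, SMon for strongly monotone, G for Gold-style learning) follow common inductive-inference usage, but the concrete relations encoded here are illustrative placeholders, not the results established in the paper.

# A minimal sketch of a "map" of learning criteria: nodes are criteria,
# and an edge a -> b records that a permits strictly less learning power
# than b. The relation table is an illustrative placeholder, not the
# theorems of the paper.

# relation[(a, b)] is one of "<" (a strictly weaker than b), ">"
# (a strictly stronger), "=" (equal learning power), "#" (incomparable).
relation = {
    ("Sd", "Psd"): "<",   # set-driven vs. partially set-driven
    ("Psd", "G"): "=",    # partially set-driven vs. Gold-style
    ("SMon", "G"): "<",   # strongly monotone vs. Gold-style
    ("Sd", "SMon"): "#",  # incomparable (placeholder)
}

def strict_edges(rel):
    """Yield directed edges (weaker, stronger) from the relation table."""
    for (a, b), r in rel.items():
        if r == "<":
            yield a, b
        elif r == ">":
            yield b, a

def to_dot(rel):
    """Render the map in Graphviz DOT format; '=' pairs share one node."""
    merged = {}
    for (a, b), r in rel.items():
        if r == "=":
            merged[a] = merged[b] = f"{a} = {b}"

    def label(criterion):
        return merged.get(criterion, criterion)

    lines = ["digraph AtlasMap {"]
    for a, b in strict_edges(rel):
        lines.append(f'  "{label(a)}" -> "{label(b)}";')
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_dot(relation))

Running the script prints a digraph in which each edge points from the strictly weaker criterion to the stronger one, and equally powerful criteria are merged into a single node; piping the output through Graphviz's dot tool yields the kind of diagram a map in the atlas would be.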

Keywords
  • computational learning
  • language learning
  • partially set-driven learning
  • strongly monotone learning

