Towards an Atlas of Computational Learning Theory
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. Such comparisons reveal trade-offs between learning restrictions and, more generally, between learning settings; furthermore, they show which restrictions can be observed without a loss of learning power.
With this paper we propose that one main focus of future research in Computational Learning should be a structured approach to determining the relations between different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be depicted in a map, a diagram showing all pairwise relations at a glance. Once we have maps for many relevant sets of learning criteria, their collection forms an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth.
In this paper we work toward this goal by providing three example maps: one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps with a similar base structure.
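To illustrate the idea of a map, the following minimal sketch (not from the paper; the criteria names and containments are purely illustrative) records pairwise relations between learning criteria as directed edges "A -> B", meaning every class learnable under criterion A is also learnable under B, and computes the transitive reduction, i.e., exactly the edges one would draw in the diagram:

```python
# Hypothetical criteria and illustrative containments between their
# learning powers; an edge (A, B) means "power of A is included in power of B".
criteria = ["SMon", "Mon", "WMon", "TxtEx"]
contains = {
    ("SMon", "Mon"), ("Mon", "WMon"), ("WMon", "TxtEx"),
    ("SMon", "WMon"), ("SMon", "TxtEx"), ("Mon", "TxtEx"),
}

def transitive_reduction(nodes, edges):
    """Drop every edge implied by a two-step path; assumes edges are acyclic."""
    reduced = set(edges)
    for a, b in edges:
        for c in nodes:
            if a != c != b and (a, c) in edges and (c, b) in edges:
                reduced.discard((a, b))
    return reduced

# The surviving edges form the "map": SMon -> Mon -> WMon -> TxtEx.
for a, b in sorted(transitive_reduction(criteria, contains)):
    print(f"{a} -> {b}")
```

Proper inclusions and incomparabilities, which the paper's maps also record, would need additional edge labels; the sketch only captures the containment skeleton.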
Keywords: computational learning, language learning, partially set-driven learning, strongly monotone learning
Pages: 47:1-47:13
Category: Regular Paper
Authors: Timo Kötzing, Martin Schirneck
DOI: 10.4230/LIPIcs.STACS.2016.47
Licensed under the Creative Commons Attribution 3.0 Unported license (https://creativecommons.org/licenses/by/3.0/legalcode).