Approximate Learning of Limit-Average Automata

Authors: Jakub Michaliszyn, Jan Otop



File

LIPIcs.CONCUR.2019.17.pdf
  • Filesize: 0.51 MB
  • 16 pages

Author Details

Jakub Michaliszyn
  • University of Wrocław, Poland
Jan Otop
  • University of Wrocław, Poland

Cite As

Jakub Michaliszyn and Jan Otop. Approximate Learning of Limit-Average Automata. In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 17:1-17:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019) https://doi.org/10.4230/LIPIcs.CONCUR.2019.17

Abstract

Limit-average automata are weighted automata on infinite words that use the average to aggregate the weights seen along infinite runs. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive learning case, we show that limit-average automata are not PAC-learnable, as samples must be of exponential size to provide (with good probability) enough detail to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active learning case, we show that limit-average automata can be learned almost-exactly, i.e., we can learn in polynomial time an automaton that is consistent with the target automaton on almost all words. On the other hand, we show that the problem of learning an automaton that approximates the target automaton (with perhaps fewer states) is NP-complete. The above results are shown for the uniform distribution on words. We briefly discuss learning over different distributions.
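The aggregation described in the abstract — the limit of the running averages of the weights along an infinite run — can be illustrated on ultimately periodic ("lasso-shaped") runs, where the finite prefix has no influence in the limit and the value is simply the mean of the weights on the repeated cycle. The following sketch is illustrative only; the function names and the lasso encoding are assumptions, not taken from the paper.

```python
from statistics import mean

def limit_average(prefix, cycle):
    """Limit-average value of an ultimately periodic run.

    For a run whose weight sequence is `prefix` followed by `cycle`
    repeated forever, the running averages converge to the mean of the
    cycle weights; the finite prefix vanishes in the limit.
    (Illustrative encoding, not from the paper.)
    """
    return mean(cycle)

def running_average(prefix, cycle, n):
    """Average of the first n weights of the infinite run prefix.cycle^ω."""
    weights = list(prefix)
    i = 0
    while len(weights) < n:
        weights.append(cycle[i % len(cycle)])
        i += 1
    return mean(weights[:n])
```

As `n` grows, `running_average` approaches `limit_average`: for the run `5, 5, 5, (1, 3)^ω`, the running averages converge to 2.0 regardless of the prefix.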

ACM Subject Classification
  • Theory of computation → Automata over infinite objects
  • Theory of computation → Quantitative automata
Keywords
  • weighted automata
  • learning
  • expected value

