Classification Among Hidden Markov Models

Authors: S. Akshay, Hugo Bazille, Eric Fabre, Blaise Genest

Author Details

S. Akshay
  • IIT Bombay, India
Hugo Bazille
  • Univ Rennes, Inria, IRISA, France
Eric Fabre
  • Univ Rennes, Inria, IRISA, France
Blaise Genest
  • Univ Rennes, CNRS, IRISA, France


We would like to thank Stefan Kiefer for his expert opinion, as well as anonymous reviewers for their constructive comments.

Cite As

S. Akshay, Hugo Bazille, Eric Fabre, and Blaise Genest. Classification Among Hidden Markov Models. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


An important task in AI is that of classifying an observation as belonging to one class among several (e.g. image classification). We revisit this problem in a verification context: given k partially observable systems modeled as Hidden Markov Models (also called labeled Markov chains), and an execution of one of them, can we eventually classify which system performed this execution, just by looking at its observations? Interestingly, this problem generalizes several problems in verification and control, such as fault diagnosis and opacity. Classification also has strong connections with different notions of distance between stochastic models. In this paper, we study a general and practical notion of classifiers, namely limit-sure classifiers, which allow misclassification, i.e. errors in classification, as long as the probability of misclassification tends to 0 as the length of the observation grows. To study the complexity of several notions of classification, we develop techniques based on a simple but powerful notion of stationary distributions for HMMs. We prove that one cannot classify among HMMs iff there is a finite separating word from their stationary distributions. This provides a direct proof that classifiability can be checked in PTIME, as an alternative to existing proofs using separating events (i.e. sets of infinite separating words) for the total variation distance. Our approach also allows us to introduce and tackle new notions of classifiability which are applicable in a security context.
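To make the classification task concrete, here is a minimal sketch (not from the paper; the toy models and all names are illustrative assumptions): given an observation sequence produced by one of two HMMs, a maximum-likelihood classifier scores the sequence under each model with the forward algorithm and picks the more likely one. For classifiable pairs, the probability of misclassification vanishes as the observation grows.

```python
# Illustrative sketch: maximum-likelihood classification between two HMMs
# (labeled Markov chains). Models are toy examples, not from the paper.
# Assumes all listed probabilities are strictly positive, so prefix
# probabilities never hit zero inside the log.
import math
import random

def forward_loglik(obs, init, trans, emit):
    """Log-likelihood of obs under an HMM:
    init[s]     - initial probability of state s
    trans[s][t] - transition probability s -> t
    emit[s][o]  - probability of emitting symbol o in state s
    Uses per-step rescaling to avoid underflow on long sequences."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for o in obs[1:]:
        scale = sum(alpha)           # probability mass of the prefix so far
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return loglik + math.log(sum(alpha))

def sample(hmm, length, rng):
    """Draw an observation sequence of the given length from an HMM."""
    init, trans, emit = hmm
    s = rng.choices(range(len(init)), weights=init)[0]
    obs = []
    for _ in range(length):
        symbols = list(emit[s])
        obs.append(rng.choices(symbols, weights=[emit[s][o] for o in symbols])[0])
        s = rng.choices(range(len(init)), weights=trans[s])[0]
    return obs

# Two toy HMMs over {'a','b'} whose stationary output statistics differ:
# A emits 'a' with long-run frequency 0.55, B with frequency 0.4.
A = ([1.0, 0.0],
     [[0.9, 0.1], [0.1, 0.9]],
     [{'a': 0.8, 'b': 0.2}, {'a': 0.3, 'b': 0.7}])
B = ([0.5, 0.5],
     [[0.5, 0.5], [0.5, 0.5]],
     [{'a': 0.4, 'b': 0.6}, {'a': 0.4, 'b': 0.6}])

rng = random.Random(0)
obs = sample(A, 500, rng)
guess = 'A' if forward_loglik(obs, *A) > forward_loglik(obs, *B) else 'B'
print(guess)  # 'A' with overwhelming probability at this length
```

Because A and B separate already on single-letter frequencies, the log-likelihood gap grows linearly in the observation length, so the error probability decays exponentially; the paper's limit-sure notion only requires that it tends to 0.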

ACM Subject Classification
  • Theory of computation
  • verification: probabilistic systems
  • partially observable systems



