Algorithms and Hardness Results for Computing Cores of Markov Chains

Authors: Ali Ahmadi, Krishnendu Chatterjee, Amir Kafshdar Goharshady, Tobias Meggendorfer, Roodabeh Safavi, Ðorđe Žikelić


File

LIPIcs.FSTTCS.2022.29.pdf
  • Filesize: 0.83 MB
  • 20 pages

Author Details

Ali Ahmadi
  • Hong Kong University of Science and Technology (HKUST), China
Krishnendu Chatterjee
  • Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria
Amir Kafshdar Goharshady
  • Hong Kong University of Science and Technology (HKUST), China
Tobias Meggendorfer
  • Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria
Roodabeh Safavi
  • Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria
  • Hong Kong University of Science and Technology (HKUST), China
Ðorđe Žikelić
  • Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria

Cite As

Ali Ahmadi, Krishnendu Chatterjee, Amir Kafshdar Goharshady, Tobias Meggendorfer, Roodabeh Safavi, and Ðorđe Žikelić. Algorithms and Hardness Results for Computing Cores of Markov Chains. In 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, pp. 29:1-29:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.FSTTCS.2022.29

Abstract

Given a Markov chain M = (V, v_0, δ), with state space V and a starting state v_0, and a probability threshold ε, an ε-core is a subset C of states that is left with probability at most ε. More formally, C ⊆ V is an ε-core if and only if ℙ[reach(V∖C)] ≤ ε. Cores have been applied in a wide variety of verification problems over Markov chains, Markov decision processes, and probabilistic programs, as a means of discarding uninteresting and low-probability parts of a probabilistic system and instead being able to focus on the states that are likely to be encountered in a real-world run. In this work, we focus on the problem of computing a minimal ε-core in a Markov chain. Our contributions include both negative and positive results: (i) We show that the decision problem on the existence of an ε-core of a given size is NP-complete. This solves an open problem posed in [Jan Kretínský and Tobias Meggendorfer, 2020]. We additionally show that the problem remains NP-complete even when limited to acyclic Markov chains with bounded maximal vertex degree; (ii) We provide a polynomial-time algorithm for computing a minimal ε-core on Markov chains over control-flow graphs of structured programs. A straightforward combination of our algorithm with standard branch prediction techniques allows one to apply the idea of cores to find a subset of program lines that are left with low probability, and then focus any desired static analysis on this core subset.
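To illustrate the definition (this is not the paper's algorithm), the ε-core condition ℙ[reach(V∖C)] ≤ ε can be checked for a candidate set C by computing the standard reachability probability of the states outside C, e.g. via value iteration. The sketch below uses a small hypothetical chain; the function names and the example transitions are illustrative assumptions, not from the paper.

```python
def reach_prob(delta, targets, iters=1000):
    """Approximate P[reach targets] from every state via value iteration.

    delta: dict mapping each state to a dict {successor: probability}.
    targets: set of states whose reachability probability we want.
    """
    states = set(delta)
    p = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(iters):
        p = {
            s: 1.0 if s in targets
            else sum(pr * p[t] for t, pr in delta[s].items())
            for s in states
        }
    return p


def is_eps_core(delta, v0, core, eps, tol=1e-9):
    """Check the defining condition: P[reach(V \\ core)] <= eps from v0."""
    outside = set(delta) - core
    return reach_prob(delta, outside)[v0] <= eps + tol


# Hypothetical 4-state chain: states 2 and 3 are absorbing,
# and state 3 is the only state outside the candidate core {0, 1, 2}.
delta = {
    0: {1: 0.5, 2: 0.5},
    1: {2: 0.9, 3: 0.1},
    2: {2: 1.0},
    3: {3: 1.0},
}
# P[reach {3}] from 0 is 0.5 * 0.1 = 0.05,
# so {0, 1, 2} is an eps-core for any eps >= 0.05.
print(is_eps_core(delta, 0, {0, 1, 2}, eps=0.05))  # True
print(is_eps_core(delta, 0, {0, 1, 2}, eps=0.01))  # False
```

Note that this only *checks* a given set; the paper's contribution concerns finding a *minimal* such set, which it shows is NP-complete in general.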

Subject Classification

ACM Subject Classification
  • Software and its engineering → Formal software verification
  • Theory of computation → Graph algorithms analysis
Keywords
  • Markov Chains
  • Cores
  • Complexity


References

  1. Ali Asadi, Krishnendu Chatterjee, Amir Kafshdar Goharshady, Kiarash Mohammadi, and Andreas Pavlogiannis. Faster algorithms for quantitative analysis of MCs and MDPs with small treewidth. In ATVA, pages 253-270, 2020.
  2. Pranav Ashok, Yuliya Butkova, Holger Hermanns, and Jan Kretínský. Continuous-time Markov decisions based on partial exploration. In ATVA, pages 317-334, 2018.
  3. Pranav Ashok, Krishnendu Chatterjee, Przemyslaw Daca, Jan Kretínský, and Tobias Meggendorfer. Value iteration for long-run average reward in Markov decision processes. In CAV, pages 201-221, 2017.
  4. Christel Baier and Joost-Pieter Katoen. Principles of model checking. MIT Press, 2008.
  5. Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. Relatively complete verification of probabilistic programs: an expressive language for expectation-based reasoning. In POPL, 2021.
  6. Tomás Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtech Forejt, Jan Kretínský, Marta Z. Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov decision processes using learning algorithms. In ATVA, pages 98-114, 2014.
  7. Krishnendu Chatterjee, Hongfei Fu, and Amir Kafshdar Goharshady. Termination analysis of probabilistic programs through Positivstellensatz's. In CAV, pages 3-22, 2016.
  8. Krishnendu Chatterjee, Hongfei Fu, and Amir Kafshdar Goharshady. Non-polynomial worst-case analysis of recursive programs. In CAV, pages 41-63, 2017.
  9. Krishnendu Chatterjee, Hongfei Fu, and Amir Kafshdar Goharshady. Non-polynomial worst-case analysis of recursive programs. ACM Trans. Program. Lang. Syst., 41(4):20:1-20:52, 2019.
  10. Krishnendu Chatterjee, Hongfei Fu, Amir Kafshdar Goharshady, and Nastaran Okati. Computational approaches for stochastic shortest path on succinct MDPs. In IJCAI, pages 4700-4707, 2018.
  11. Krishnendu Chatterjee, Amir Kafshdar Goharshady, Rasmus Ibsen-Jensen, and Yaron Velner. Ergodic mean-payoff games for the analysis of attacks in crypto-currencies. In CONCUR, pages 11:1-11:17, 2018.
  12. Krishnendu Chatterjee, Amir Kafshdar Goharshady, Tobias Meggendorfer, and Ðorđe Žikelić. Sound and complete certificates for quantitative termination analysis of probabilistic programs. In CAV, 2022.
  13. Krishnendu Chatterjee, Amir Kafshdar Goharshady, and Arash Pourdamghani. Probabilistic smart contracts: Secure randomness on the blockchain. In ICBC, pages 403-412, 2019.
  14. Krishnendu Chatterjee, Amir Kafshdar Goharshady, and Yaron Velner. Quantitative analysis of smart contracts. In ESOP, volume 10801, pages 739-767, 2018.
  15. Krishnendu Chatterjee, Ehsan Kafshdar Goharshady, Petr Novotný, Jiri Zárevúcky, and Ðorđe Žikelić. On lexicographic proof rules for probabilistic termination. In FM, volume 13047 of Lecture Notes in Computer Science, pages 619-639. Springer, 2021.
  16. Krishnendu Chatterjee, Petr Novotný, and Ðorđe Žikelić. Stochastic invariants for probabilistic termination. In POPL, pages 145-160, 2017.
  17. Christian Dehnert, Sebastian Junges, Joost-Pieter Katoen, and Matthias Volk. A storm is coming: A modern probabilistic model checker. In CAV, pages 592-600, 2017.
  18. Devdatt P. Dubhashi and Alessandro Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
  19. Paul A. Gagniuc. Markov chains: from theory to implementation and experimentation. Wiley, 2017.
  20. Daniel T. Gillespie. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of Computational Physics, 22(4):403-434, 1976.
  21. S. Gómez, A. Arenas, J. Borge-Holthoefer, S. Meloni, and Y. Moreno. Discrete-time Markov chain approach to contact-based disease spreading in complex networks. EPL, 89(3), 2010.
  22. Mingzhang Huang, Hongfei Fu, Krishnendu Chatterjee, and Amir Kafshdar Goharshady. Modular verification for almost-sure termination of probabilistic programs. In OOPSLA, pages 129:1-129:29, 2019.
  23. Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artif. Intell., 101(1-2):99-134, 1998.
  24. Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. On the hardness of analyzing probabilistic programs. Acta Informatica, 56(3):255-285, 2019.
  25. Hadas Kress-Gazit, Georgios E. Fainekos, and George J. Pappas. Temporal-logic-based reactive mission and motion planning. IEEE Trans. Robotics, 25(6):1370-1381, 2009.
  26. Jan Kretínský and Tobias Meggendorfer. Of cores: A partial-exploration framework for Markov decision processes. Log. Methods Comput. Sci., 16(4), 2020.
  27. Marta Z. Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In CAV, pages 585-591, 2011.
  28. Annabelle McIver and Carroll Morgan. Abstraction, Refinement and Proof for Probabilistic Systems. Monographs in Computer Science. Springer, 2005.
  29. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. CRC Press, 1999.
  30. Johan Paulsson. Summing up the noise in gene networks. Nature, 427(6973):415-418, 2004.
  31. Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, 1994.
  32. Henry Gordon Rice. Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, 74(2):358-366, 1953.
  33. James E. Smith. A study of branch prediction strategies. In ISCA, pages 202-215, 1998.
  34. Jana Wagemaker, Nate Foster, Tobias Kappé, Dexter Kozen, Jurriaan Rot, and Alexandra Silva. Concurrent NetKAT - modeling and analyzing stateful, concurrent networks. In ESOP, pages 575-602, 2022.
  35. Jinyi Wang, Yican Sun, Hongfei Fu, Krishnendu Chatterjee, and Amir Kafshdar Goharshady. Quantitative analysis of assertion violations in probabilistic programs. In PLDI, pages 1171-1186. ACM, 2021.
  36. Peixin Wang, Hongfei Fu, Amir Kafshdar Goharshady, Krishnendu Chatterjee, Xudong Qin, and Wenjun Shi. Cost analysis of nondeterministic probabilistic programs. In PLDI, pages 204-220. ACM, 2019.