LIPIcs.CSL.2025.2.pdf
The verification of infinite-state Markov chains is a challenging problem, even when those chains are described by structured high-level models. In 2007, Abdulla et al. introduced the concept of decisiveness [Abdulla et al., 2007] and showed that a natural approximation scheme can be applied to infinite Markov chains that are decisive. To the best of our knowledge, this was the only generic scheme that could be widely applied to (decisive) infinite Markov chains while providing guarantees on the computed values (under some mild effectiveness assumptions). On the other hand, statistical model-checking is a very efficient method for estimating probabilities in stochastic systems [Younes and Simmons, 2006; H. L. S. Younes et al., 2010]. In this talk, we explain that decisiveness is also the key property that makes such statistical methods applicable to infinite Markov chains.

While decisiveness is a crucial property, not all Markov chains are decisive, and it is therefore desirable to devise methods for analyzing non-decisive Markov chains. Importance sampling [Kahn and Harris, 1951] is a method that was proposed to improve the efficiency of statistical model-checking, in particular for estimating the probabilities of rare events in stochastic systems. The idea is to bias the original chain and to estimate probabilities in the biased chain; guarantees can sometimes be given, as studied for instance in [B. Barbot et al., 2012]. In this talk, we explain how the importance-sampling idea can be used to turn a non-decisive Markov chain into a biased decisive Markov chain, in which probabilities can be estimated (with guarantees). We apply the general approach to a class of probabilistic pushdown automata. Our algorithms have been implemented in the tool Cosmos [P. Ballarini et al., 2015], and we discuss the experimental methodology as well as our (partial) conclusions.
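To make the biasing idea concrete, here is a minimal illustrative sketch (not the construction from the talk; the random walk, parameters, and function name are hypothetical choices for exposition). The original chain is a random walk on the nonnegative integers with upward drift (P(+1) = 0.6), which is non-decisive for the target state 0: with positive probability a run drifts to infinity and never settles the reachability question. Swapping the step probabilities yields a biased chain with downward drift that hits 0 almost surely (it is decisive for 0), and reweighting each sampled path by its likelihood ratio recovers an unbiased estimate of the reachability probability in the original chain.

```python
import random

def estimate_hit_zero(p_up=0.6, q_up=0.4, n_samples=10_000,
                      max_steps=10_000, seed=42):
    """Importance-sampling estimate of P(walk with P(+1)=p_up ever hits 0),
    starting from state 1.

    We simulate the *biased* walk with P(+1)=q_up (downward drift, hence
    decisive for the target 0: it hits 0 almost surely) and reweight each
    sampled path by the likelihood ratio of the original chain against the
    biased one, accumulated step by step.
    """
    rng = random.Random(seed)
    p_down = 1.0 - p_up   # original chain's down-step probability
    q_down = 1.0 - q_up   # biased chain's down-step probability
    total = 0.0
    for _ in range(n_samples):
        x, w = 1, 1.0     # current state and running likelihood ratio
        for _ in range(max_steps):
            if rng.random() < q_up:   # biased chain takes a +1 step
                x += 1
                w *= p_up / q_up      # per-step likelihood ratio
            else:                     # biased chain takes a -1 step
                x -= 1
                w *= p_down / q_down
            if x == 0:
                total += w            # path reached 0: contribute its weight
                break
        # paths truncated at max_steps contribute 0 (vanishingly rare here,
        # since the biased walk has strictly downward drift)
    return total / n_samples
```

For this gambler's-ruin walk the true answer is (1 - p_up)/p_up = 2/3, and swapping the probabilities is the exponential change of measure, so every accepted path carries the same weight and the estimator has essentially zero variance; direct sampling of the original chain would instead waste effort on runs that drift away and must be truncated without ever resolving the property.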