Multiple-Environment Markov Decision Processes

Authors: Jean-François Raskin, Ocan Sankur




Cite As

Jean-François Raskin and Ocan Sankur. Multiple-Environment Markov Decision Processes. In 34th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 29, pp. 531-543, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Abstract

We introduce Multiple-Environment Markov Decision Processes (MEMDPs), which are MDPs with a set of probabilistic transition functions. The goal in an MEMDP is to synthesize a single controller strategy with guaranteed performance against all environments, even though the environment is unknown a priori. While MEMDPs can be seen as a special class of partially observable MDPs, we show that several verification problems that are undecidable for partially observable MDPs are decidable for MEMDPs, and sometimes even admit efficient solutions.
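The setting in the abstract can be sketched concretely: an MEMDP is one state and action space shared by several transition functions, and a single strategy is judged by its worst-case performance across them. The toy below is an illustrative assumption, not the paper's construction; all names (`reach_prob`, `worst_case_value`) and the two example environments are invented for this sketch.

```python
# Minimal illustrative MEMDP sketch (assumed names, not from the paper).
# An environment maps state -> action -> list of (next_state, probability).

def reach_prob(transitions, strategy, target, n_states, iters=200):
    """Probability of eventually reaching `target` from state 0 under a
    memoryless deterministic strategy (state -> action), by fixed-point
    iteration on the reachability equations."""
    v = [0.0] * n_states
    v[target] = 1.0
    for _ in range(iters):
        nv = list(v)
        for s in range(n_states):
            if s == target:
                continue
            nv[s] = sum(p * v[t] for t, p in transitions[s][strategy[s]])
        v = nv
    return v[0]

def worst_case_value(environments, strategy, target, n_states):
    # One strategy must perform against every environment, since the true
    # environment is unknown a priori: take the minimum over all of them.
    return min(reach_prob(env, strategy, target, n_states)
               for env in environments)

# Two environments over states {0: start, 1: sink, 2: target}.
# Action 'a' reaches the target surely in env_A but fails surely in env_B;
# action 'b' reaches it with probability 0.5 in both.
env_A = {0: {'a': [(2, 1.0)], 'b': [(2, 0.5), (1, 0.5)]},
         1: {'a': [(1, 1.0)], 'b': [(1, 1.0)]}, 2: {}}
env_B = {0: {'a': [(1, 1.0)], 'b': [(2, 0.5), (1, 0.5)]},
         1: {'a': [(1, 1.0)], 'b': [(1, 1.0)]}, 2: {}}
envs = [env_A, env_B]

risky = worst_case_value(envs, {0: 'a', 1: 'a'}, target=2, n_states=3)
safe = worst_case_value(envs, {0: 'b', 1: 'a'}, target=2, n_states=3)
```

Here the strategy playing 'a' is optimal in env_A alone but has worst-case value 0, while playing 'b' guarantees 0.5 in every environment, illustrating why synthesis against all environments differs from solving each MDP separately.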
Keywords

  • Markov decision processes
  • probabilistic systems
  • multiple objectives



