A policy iteration algorithm for Markov decision processes skip-free in one direction

Authors: Joke Lambert, Benny van Houdt, Chris Blondia



Cite As

Joke Lambert, Benny van Houdt, and Chris Blondia. A policy iteration algorithm for Markov decision processes skip-free in one direction. In Numerical Methods for Structured Markov Chains. Dagstuhl Seminar Proceedings, Volume 7461, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008). https://doi.org/10.4230/DagSemProc.07461.3

Abstract

In this paper we present a new algorithm for policy iteration for Markov decision processes (MDPs) that are skip-free in one direction. This algorithm, which is based on matrix analytic methods, is in the same spirit as the algorithm of White (Stochastic Models, 21:785-797, 2005), which was limited to matrices that are skip-free in both directions.
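To make the structural assumption concrete, here is a hedged illustration in LaTeX of a standard form from the matrix analytic methods literature (it is not taken from this paper): a chain that is skip-free to the left, i.e. whose level can decrease by at most one per transition, has a block upper-Hessenberg transition matrix, written here for the level-independent (M/G/1-type) case as

P =
\begin{pmatrix}
  B_1    & B_2 & B_3    & B_4    & \cdots \\
  A_0    & A_1 & A_2    & A_3    & \cdots \\
  0      & A_0 & A_1    & A_2    & \cdots \\
  0      & 0   & A_0    & A_1    & \cdots \\
  \vdots &     & \ddots & \ddots & \ddots
\end{pmatrix}

where the blocks B_i and A_i act on the phases within a level. A chain that is skip-free in both directions, the setting of White's algorithm, additionally forbids upward jumps of more than one level, which makes the matrix block tridiagonal.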

Optimization problems that can be solved using Markov decision processes arise in the domain of optical buffers, when trying to improve the loss rate of fibre delay line (FDL) buffers. Based on the analysis of such an FDL buffer, we present a comparative study of the different techniques available to solve an MDP. The results illustrate that exploiting the structure of the transition matrices allows us to handle larger systems while reducing the computation times.
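As a point of reference for such a comparison, the sketch below shows textbook policy iteration for a small discounted finite MDP in Python with NumPy. The transition and reward data are made up for illustration, and the code deliberately does not exploit the skip-free block structure on which the paper's algorithm relies.

import numpy as np


def policy_iteration(P, R, gamma=0.95):
    """Plain policy iteration for a finite discounted MDP.

    P[a] is the |S| x |S| transition matrix under action a and
    R[a] the length-|S| expected reward vector under action a.
    (Illustrative setup only; the skip-free structure exploited in
    the paper is not used here.)
    """
    n_actions = len(P)
    n_states = P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy

    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        R_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

        # Policy improvement: act greedily via a one-step lookahead.
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)

        if np.array_equal(new_policy, policy):  # no change: converged
            return policy, V
        policy = new_policy


# Tiny two-state, two-action example (made-up numbers).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
policy, V = policy_iteration(P, R)
print(policy, V)

In this generic form, the exact linear solve in the evaluation step dominates the cost as the state space grows, which is where exploiting the block structure of the transition matrices, as the paper does, pays off.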

Keywords
  • Markov Decision Process
  • Policy Evaluation
  • Skip-Free
  • Optical buffers
  • Fibre Delay Lines
