Kretínský, Jan ;
Pérez, Guillermo A. ;
Raskin, Jean-François
Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints
Abstract
We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process (MDP) with unknown probabilistic transition function and unknown reward function. Assuming the support of the unknown transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved depending on how much memory one is willing to use. (i) For all epsilon and gamma we can construct an online-learning finite-memory strategy that almost-surely satisfies the parity objective and which achieves an epsilon-optimal mean payoff with probability at least 1 - gamma. (ii) Alternatively, for all epsilon and gamma there exists an online-learning infinite-memory strategy that satisfies the parity objective surely and which achieves an epsilon-optimal mean payoff with probability at least 1 - gamma. We extend the above results to MDPs consisting of more than one end component in a natural way. Finally, we show that the aforementioned guarantees are tight, i.e. there are MDPs for which stronger combinations of the guarantees cannot be ensured.
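The setting above can be illustrated with a minimal sketch of the online-learning ingredient: an agent that does not know the transition probabilities maintains empirical estimates from observed successor counts while tracking the running mean payoff of its run. The toy MDP below (`TRUE_P`, `REWARD`, one action per state) is a hypothetical example for illustration only, not the construction from the paper.

```python
import random
from collections import defaultdict

# Hypothetical two-state MDP with one action per state; the true transition
# function TRUE_P is hidden from the learner (only its support is assumed known).
TRUE_P = {0: [(0, 0.7), (1, 0.3)], 1: [(0, 0.4), (1, 0.6)]}
REWARD = {0: 1.0, 1: 0.0}

def step(state, rng):
    """Sample the next state from the true (unknown-to-the-learner) dynamics."""
    r, acc = rng.random(), 0.0
    for succ, p in TRUE_P[state]:
        acc += p
        if r < acc:
            return succ
    return TRUE_P[state][-1][0]

def run(n_steps, seed=0):
    """Online learning: count observed successors to estimate the transition
    function, and track the empirical mean payoff of the run."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))
    state, total = 0, 0.0
    for _ in range(n_steps):
        total += REWARD[state]
        nxt = step(state, rng)
        counts[state][nxt] += 1
        state = nxt
    estimates = {s: {t: c / sum(cs.values()) for t, c in cs.items()}
                 for s, cs in counts.items()}
    return total / n_steps, estimates

mean_payoff, estimates = run(100_000)
```

As the run grows, `estimates` converges to `TRUE_P` and `mean_payoff` to the stationary mean payoff; the paper's strategies additionally interleave such estimation with play that keeps the parity objective satisfied.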
BibTeX Entry
@InProceedings{kretnsk_et_al:LIPIcs:2018:9546,
author = {Jan Kret{\'i}nsk{\'y} and Guillermo A. P{\'e}rez and Jean-Fran{\c{c}}ois Raskin},
title = {{Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints}},
booktitle = {29th International Conference on Concurrency Theory (CONCUR 2018)},
pages = {8:1--8:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-087-3},
ISSN = {1868-8969},
year = {2018},
volume = {118},
editor = {Sven Schewe and Lijun Zhang},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2018/9546},
URN = {urn:nbn:de:0030-drops-95468},
doi = {10.4230/LIPIcs.CONCUR.2018.8},
annote = {Keywords: Markov decision processes, Reinforcement learning, Beyond worst case}
}
Keywords: Markov decision processes, Reinforcement learning, Beyond worst case
Seminar: 29th International Conference on Concurrency Theory (CONCUR 2018)
Issue date: 2018
Date of publication: 31.08.2018