Minimizing Regret in Discounted-Sum Games

Authors: Paul Hunter, Guillermo A. Pérez, Jean-François Raskin

Document Identifiers
  • DOI: 10.4230/LIPIcs.CSL.2016.30

Cite As

Paul Hunter, Guillermo A. Pérez, and Jean-François Raskin. Minimizing Regret in Discounted-Sum Games. In 25th EACSL Annual Conference on Computer Science Logic (CSL 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 62, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016). https://doi.org/10.4230/LIPIcs.CSL.2016.30

Abstract

In this paper, we study the problem of minimizing regret in discounted-sum games played on weighted game graphs.  We give algorithms for the general problem of computing the minimal regret of the controller (Eve) as well as several variants depending on which strategies the environment (Adam) is permitted to use.  We also consider the problem of synthesizing regret-free strategies for Eve in each of these scenarios.
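
To fix intuitions, the central quantities can be sketched in standard notation (the symbols λ, Val, out, reg and the strategy class 𝒯 below are illustrative assumptions; the paper's formal definitions may differ in detail). For a play π = v_0 v_1 v_2 ... in a weighted game graph with edge-weight function w and discount factor 0 < λ < 1:

  % Discounted-sum value of a play (assumed standard notation)
  \[
    \mathrm{Val}(\pi) \;=\; \sum_{i \ge 0} \lambda^{i}\, w(v_i, v_{i+1})
  \]
  % Regret of Eve's strategy \sigma against a class \mathcal{T} of Adam strategies:
  % the worst-case difference between the value of the best response \sigma'
  % Eve could have played, had she known Adam's strategy \tau, and the value
  % she actually obtains by playing \sigma.
  \[
    \mathrm{reg}_{\mathcal{T}}(\sigma) \;=\;
      \sup_{\tau \in \mathcal{T}}
      \Bigl( \sup_{\sigma'} \mathrm{Val}\bigl(\mathrm{out}(\sigma',\tau)\bigr)
             \;-\; \mathrm{Val}\bigl(\mathrm{out}(\sigma,\tau)\bigr) \Bigr),
    \qquad
    \mathrm{Reg}_{\mathcal{T}} \;=\; \inf_{\sigma} \mathrm{reg}_{\mathcal{T}}(\sigma)
  \]

Here out(σ, τ) denotes the play obtained when Eve follows σ and Adam follows τ; a strategy σ is regret-free when reg_𝒯(σ) = 0, and the variants mentioned above correspond to different choices of the class 𝒯 of strategies permitted to Adam.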

Keywords
  • Quantitative games
  • Regret
  • Verification
  • Synthesis
  • Game theory
