Optimizing Bayesian Information Revelation Strategy in Prediction Markets: the Alice Bob Alice Case

Authors: Yuqing Kong, Grant Schoenebeck



  • Filesize: 3.07 MB
  • 20 pages


Cite As

Yuqing Kong and Grant Schoenebeck. Optimizing Bayesian Information Revelation Strategy in Prediction Markets: the Alice Bob Alice Case. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Prediction markets provide a unique and compelling way to sell and aggregate information, yet a good understanding of optimal strategies for agents participating in such markets remains elusive. To model this complex setting, prior work proposes a three-stage game called the Alice-Bob-Alice (A-B-A) game: Alice participates in the market first, then Bob joins, and then Alice has a chance to participate again. While prior work has made progress in characterizing the optimal strategy for certain interesting edge cases, computing Alice's best strategy in the A-B-A game for a general information structure remained an open question. In this paper, we analyze the A-B-A game for a general information structure and (1) show a "revelation-principle"-style result: it suffices for Alice to use her private signal space as her announced signal space, that is, Alice cannot gain by revealing her information more "finely"; (2) provide an FPTAS that computes the optimal information revelation strategy up to additive error when Alice's information is a signal from a constant-sized set; (3) show that it is sometimes better for Alice to reveal partial information in the first stage, even when Alice's information is a single binary bit.
  • prediction market
  • information revelation
  • optimization
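To make the A-B-A setup concrete, here is a minimal Python sketch under a logarithmic market scoring rule, where a trader who moves the market prediction from p to q earns log q(w) - log p(w) when outcome w occurs. The joint distribution and all names below are hypothetical (not taken from the paper), and the sketch only compares Alice's two extreme pure strategies: reveal her signal fully in stage 1, or stay out until stage 3.

```python
import math

# Hypothetical joint distribution P(w, a, b): binary outcome w,
# Alice's signal a, Bob's signal b. Numbers are illustrative only.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def p_w_given(**obs):
    """P(w=1 | observed signals), e.g. p_w_given(a=1) or p_w_given(a=1, b=0)."""
    num = den = 0.0
    for (w, a, b), p in joint.items():
        if all({"a": a, "b": b}[k] == v for k, v in obs.items()):
            den += p
            num += p * w
    return num / den

def expected_payoffs(alice_reveals_first):
    """Expected log-scoring payoffs (alice, bob) under two pure strategies:
    Alice moves the market to P(w|a) in stage 1, or waits until stage 3."""
    alice = bob = 0.0
    for (w, a, b), p in joint.items():
        prior = p_w_given()           # market price before anyone trades
        pa = p_w_given(a=a)           # posterior given Alice's signal
        pb = p_w_given(b=b)           # posterior given Bob's signal
        pab = p_w_given(a=a, b=b)     # posterior given both signals

        def logscore(q):              # log score of prediction q at outcome w
            return math.log(q if w == 1 else 1 - q)

        if alice_reveals_first:
            # Stage 1: Alice moves prior -> P(w|a); stage 2: Bob -> P(w|a,b);
            # stage 3: Alice has nothing left to add.
            alice += p * (logscore(pa) - logscore(prior))
            bob += p * (logscore(pab) - logscore(pa))
        else:
            # Stage 1: Alice passes; stage 2: Bob moves prior -> P(w|b);
            # stage 3: Alice corrects to P(w|a,b).
            bob += p * (logscore(pb) - logscore(prior))
            alice += p * (logscore(pab) - logscore(pb))
    return alice, bob
```

Each payoff is an expected log-likelihood ratio of a finer posterior against a coarser one, hence nonnegative, and the two strategies move the market to the same final price P(w|a,b), so the combined payoff of Alice and Bob is identical in both cases; only its split changes. The paper's point (3) is that, on some information structures, an intermediate strategy that reveals Alice's signal only partially in stage 1 beats both of these extremes.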



