Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach

Authors: Grant Schoenebeck and Fang-Yi Yu


Author Details

Grant Schoenebeck
  • University of Michigan, Ann Arbor, MI, USA
Fang-Yi Yu
  • Harvard University, Cambridge, MA, USA

Cite As

Grant Schoenebeck and Fang-Yi Yu. Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Abstract

Peer prediction mechanisms incentivize agents to truthfully report their signals, even in the absence of verification, by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an ε-strongly truthful mechanism, where truth-telling rewards agents "strictly" more than any other strategy profile (with ε additive error), even for heterogeneous agents, and to do so while requiring as few tasks as possible.

We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal". Moreover, the mechanism is ε-strongly truthful as long as the scoring function used is sufficiently close to the ideal scoring function. This reduces the above mechanism design problem to a learning problem: specifically, learning an ideal scoring function. Because learning the prior distribution is sufficient (but not necessary) to learn the scoring function, we can apply standard learning theory techniques that leverage side information about the prior (e.g., that it is close to some parametric model). Furthermore, we derive a variational representation of an ideal scoring function and reduce the learning problem to an empirical risk minimization.

We leverage this reduction to obtain very general results for peer prediction in the multi-task setting. Specifically:
  • Sample Complexity: We show how to derive good bounds on the number of tasks required for different types of priors, in some cases exponentially improving previous results. In particular, we can upper bound the required number of tasks for parametric models with bounded learning complexity. Furthermore, our reduction applies to myriad continuous signal space settings. To the best of our knowledge, this is the first peer prediction mechanism on continuous signals designed for the multi-task setting.
  • Connection to Machine Learning: We show how to turn a soft predictor of an agent's signals (given the other agents' signals) into a mechanism. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information.
  • Stronger Properties: In the finite setting, we obtain ε-strongly truthful mechanisms for any stochastically relevant prior. Prior works either apply only to more restrictive settings or achieve a weaker notion of truthfulness (informed truthfulness).
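To convey the flavor of the reduction, here is a minimal hypothetical sketch (an illustration of the general idea, not the paper's actual mechanism or proof). For KL-divergence-based mutual information, the pointwise mutual information log P(x, y) / (P(x)P(y)) plays the role of an ideal scoring function, and its Fenchel-conjugate (variational) form suggests a multi-task payment: reward the score on a pair of reports from a shared task and subtract the conjugate term exp(S − 1) on a pair of reports drawn from distinct tasks. The plug-in estimator and smoothing below are illustrative choices for a finite signal space:

```python
import numpy as np

def estimate_score_matrix(reports_a, reports_b, n_signals, smoothing=1.0):
    """Plug-in estimate of an ideal scoring function for KL-based mutual
    information: S[x, y] ~ log( P(x, y) / (P(x) P(y)) ), estimated from
    two agents' paired reports on shared tasks (Laplace-smoothed)."""
    joint = np.full((n_signals, n_signals), smoothing)
    for x, y in zip(reports_a, reports_b):
        joint[x, y] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of agent a
    py = joint.sum(axis=0, keepdims=True)   # marginal of agent b
    return np.log(joint / (px * py))

def payment(S, shared_pair, distinct_pair):
    """Multi-task payment in variational (conjugate) form: score a pair of
    reports on a shared task, minus exp(S - 1) on reports from two
    distinct tasks (which approximate the product of the marginals)."""
    xs, ys = shared_pair
    xd, yd = distinct_pair
    return S[xs, ys] - np.exp(S[xd, yd] - 1.0)
```

With correlated truthful reports, agreement cells of `S` come out positive and disagreement cells negative, so truthful pairs on shared tasks earn more than mismatched ones; distorting reports washes out the correlation and shrinks the expected payment toward the scoring function's minimum, which is the intuition behind strong truthfulness.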

Subject Classification

ACM Subject Classification
  • Information systems → Incentive schemes
  • Theory of computation → Quality of equilibria
  • Mathematics of computing → Variational methods
  • Mathematics of computing → Information theory

Keywords
  • Information elicitation without verification
  • Crowdsourcing
  • Machine learning



