Smooth Nash Equilibria: Algorithms and Complexity

Authors: Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, Abhishek Shetty




File
  • LIPIcs.ITCS.2024.37.pdf (0.86 MB, 22 pages)

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2024.37
Author Details

Constantinos Daskalakis
  • MIT, Cambridge, MA, USA
Noah Golowich
  • MIT, Cambridge, MA, USA
Nika Haghtalab
  • University of California at Berkeley, CA, USA
Abhishek Shetty
  • University of California at Berkeley, CA, USA

Acknowledgements

This work was done in part while the authors were visiting the Learning and Games program at the Simons Institute for the Theory of Computing.

Cite As

Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, and Abhishek Shetty. Smooth Nash Equilibria: Algorithms and Complexity. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 37:1-37:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.ITCS.2024.37

Abstract

A fundamental shortcoming of the concept of Nash equilibrium is its computational intractability: approximating Nash equilibria in normal-form games is PPAD-hard. In this paper, inspired by the ideas of smoothed analysis, we introduce a relaxed variant of Nash equilibrium called σ-smooth Nash equilibrium, for a smoothness parameter σ. In a σ-smooth Nash equilibrium, players only need to achieve utility at least as high as their best deviation to a σ-smooth strategy, which is a distribution that does not put too much mass (as parametrized by σ) on any fixed action. We distinguish two variants of σ-smooth Nash equilibria: strong σ-smooth Nash equilibria, in which players are required to play σ-smooth strategies under equilibrium play, and weak σ-smooth Nash equilibria, where there is no such requirement. We show that both weak and strong σ-smooth Nash equilibria have superior computational properties to Nash equilibria: when σ, the approximation parameter ϵ, and the number of players are all constants, there is a constant-time randomized algorithm to find a weak ϵ-approximate σ-smooth Nash equilibrium in normal-form games. In the same parameter regime, there is a polynomial-time deterministic algorithm to find a strong ϵ-approximate σ-smooth Nash equilibrium in a normal-form game. These results stand in contrast to the computation of ϵ-approximate Nash equilibria, for which, under complexity-theoretic assumptions, no algorithm can run faster than quasipolynomial time. We complement our upper bounds by showing that when either σ or ϵ is an inverse polynomial, finding a weak ϵ-approximate σ-smooth Nash equilibrium becomes computationally intractable. Our results are the first to propose a variant of Nash equilibrium that is computationally tractable, allows players to act independently, and, as we discuss, is justified by an extensive line of work on individual choice behavior in the economics literature.
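
To make the weak variant concrete, the following Python sketch checks the weak ϵ-approximate σ-smooth equilibrium condition in a two-player (bimatrix) normal-form game. It assumes one natural formalization of σ-smoothness, namely that a σ-smooth strategy over n actions places mass at most 1/(σn) on any single action; the paper's exact definition may differ, and the function names below are illustrative rather than taken from the paper. By linearity of expected utility, the best σ-smooth deviation against fixed opponent play is obtained by greedily assigning the per-action mass cap to the highest-payoff pure actions.

import numpy as np

def best_smooth_deviation_value(payoffs, sigma):
    # Best expected payoff achievable with a sigma-smooth deviation, where
    # `payoffs` holds each pure action's expected payoff against the opponent's
    # fixed mixed strategy. Assumes sigma in (0, 1]; a sigma-smooth strategy is
    # taken here to place mass at most 1/(sigma * n) on any single action.
    n = len(payoffs)
    cap = 1.0 / (sigma * n)
    remaining, value = 1.0, 0.0
    for a in np.argsort(payoffs)[::-1]:   # actions from best to worst
        mass = min(cap, remaining)        # greedily fill the per-action cap
        value += mass * payoffs[a]
        remaining -= mass
        if remaining <= 0:
            break
    return value

def is_weak_smooth_equilibrium(A, B, x, y, sigma, eps):
    # Weak variant: x and y need not themselves be sigma-smooth; each player
    # must be within eps of their best sigma-smooth deviation.
    u_row, u_col = x @ A @ y, x @ B @ y
    best_row = best_smooth_deviation_value(A @ y, sigma)     # row deviations
    best_col = best_smooth_deviation_value(B.T @ x, sigma)   # column deviations
    return u_row >= best_row - eps and u_col >= best_col - eps

# Example: in matching pennies the uniform profile passes the check.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(is_weak_smooth_equilibrium(A, B, x, y, sigma=0.5, eps=1e-9))

Checking the strong variant would additionally require that x and y themselves satisfy the same per-action mass cap.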

Subject Classification

ACM Subject Classification
  • Theory of computation → Exact and approximate computation of equilibria
  • Theory of computation → Algorithmic game theory
Keywords
  • Nash equilibrium
  • smoothed analysis
  • PPAD

