Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization

Authors: Constantinos Daskalakis, Ioannis Panageas

  • Filesize: 0.61 MB
  • 18 pages

Author Details

Constantinos Daskalakis
  • CSAIL, MIT, Cambridge MA, USA
Ioannis Panageas
  • ISTD, SUTD, Singapore

Cite As

Constantinos Daskalakis and Ioannis Panageas. Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Abstract

Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al. [Daskalakis et al., ICLR, 2018] and follow-up work of Liang and Stokes [Liang and Stokes, 2018] have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in unconstrained convex-concave min-max optimization problems. We show that the same holds true in the more general problem of constrained min-max optimization under a variant of the no-regret Multiplicative-Weights-Update method called "Optimistic Multiplicative-Weights Update (OMWU)". This answers an open question of Syrgkanis et al. [Syrgkanis et al., NIPS, 2015]. The proof of our result requires fundamentally different techniques from those that exist in the no-regret learning literature and the aforementioned papers. We show that OMWU monotonically improves the Kullback-Leibler divergence of the current iterate to the (appropriately normalized) min-max solution until it enters a neighborhood of the solution. Inside that neighborhood we show that OMWU becomes a contracting map converging to the exact solution. We believe that our techniques will be useful in the analysis of the last iterate of other learning algorithms.
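The OMWU dynamics studied in the paper can be sketched in a few lines. The following is a minimal illustrative implementation for a bilinear zero-sum game min_x max_y x^T A y over the probability simplices; the "optimistic" step weights the current gradient twice and subtracts the previous one. The step size, iteration count, and initialization below are illustrative choices, not values taken from the paper.

```python
import numpy as np

def omwu(A, x0, y0, eta=0.1, T=20000):
    """Optimistic Multiplicative-Weights Update on the zero-sum game
    min_x max_y x^T A y over the probability simplices.

    Sketch based on the paper's description; eta and T are
    illustrative choices, not the paper's parameters."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    gx_prev = A @ y        # previous-round gradient for the min player
    gy_prev = A.T @ x      # previous-round gradient for the max player
    for _ in range(T):
        gx, gy = A @ y, A.T @ x
        # optimistic step: 2 * (current gradient) - (previous gradient)
        x = x * np.exp(-eta * (2 * gx - gx_prev))
        y = y * np.exp(eta * (2 * gy - gy_prev))
        x /= x.sum()       # renormalize back onto the simplex
        y /= y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y
```

For example, on Rock-Paper-Scissors (payoff matrix below, unique equilibrium at the uniform distribution), the iterates themselves approach uniform play, in contrast to vanilla MWU, whose last iterate cycles.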

Subject Classification

ACM Subject Classification
  • Theory of computation → Convergence and learning in games

Keywords and Phrases
  • No-regret learning
  • Zero-sum games
  • Convergence
  • Dynamical systems
  • KL divergence



References

  1. Ilan Adler. The equivalence of linear programs and zero-sum games. International Journal of Game Theory, pages 165-177, 2013.
  2. James P. Bailey and Georgios Piliouras. Multiplicative Weights Update in Zero-Sum Games. In Proceedings of the 2018 ACM Conference on Economics and Computation, Ithaca, NY, USA, June 18-22, 2018, pages 321-338, 2018.
  3. David Blackwell. An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, pages 1-8, 1956.
  4. G. W. Brown. Iterative Solutions of Games by Fictitious Play. In Activity Analysis of Production and Allocation, 1951.
  5. Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
  6. Eric van Damme. Stability and Perfection of Nash Equilibria. Springer, 1991.
  7. George B. Dantzig. A proof of the equivalence of the programming problem and the game problem. In Activity Analysis of Production and Allocation, pages 330-338, 1951.
  8. Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training GANs with Optimism. In Proceedings of ICLR, 2018.
  9. Oded Galor. Discrete Dynamical Systems. Springer, 2007.
  10. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
  11. Tengyuan Liang and James Stokes. Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks. arXiv preprint arXiv:1802.06132, 2018.
  12. Ruta Mehta, Ioannis Panageas, Georgios Piliouras, Prasad Tetali, and Vijay V. Vazirani. Mutation, Sexual Reproduction and Survival in Dynamic Environments. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, pages 16:1-16:29, 2017.
  13. Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in Adversarial Regularized Learning. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 2703-2717, 2018.
  14. Panayotis Mertikopoulos, Houssam Zenati, Bruno Lecouat, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Mirror descent in saddle-point problems: Going the extra (gradient) mile. CoRR, abs/1807.02629, 2018.
  15. Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
  16. Gerasimos Palaiopanos, Ioannis Panageas, and Georgios Piliouras. Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5874-5884, 2017.
  17. Alexander Rakhlin and Karthik Sridharan. Online Learning with Predictable Sequences. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, pages 993-1019, 2013.
  18. Julia Robinson. An Iterative Method of Solving a Game. Annals of Mathematics, pages 296-301, 1951.
  19. Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast Convergence of Regularized Learning in Games. In Annual Conference on Neural Information Processing Systems 2015, pages 2989-2997, 2015.