
Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk)

Author: Constantinos Daskalakis



Author Details

Constantinos Daskalakis
  • EECS and CSAIL, MIT, Cambridge, MA, USA

Cite As

Constantinos Daskalakis. Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, p. 2:1, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


Machine Learning has recently made significant advances in challenges such as speech and image recognition, automatic translation, and text generation, much of that progress being fueled by the success of gradient-descent-based optimization methods in computing local optima of non-convex objectives. From robustifying machine learning models against adversarial attacks to causal inference, training generative models, multi-robot interactions, and learning in strategic environments, many outstanding challenges in Machine Learning lie at its interface with Game Theory. On this front, however, gradient-descent-based optimization methods have been less successful. Here, the role of single-objective optimization is played by equilibrium computation, but gradient-descent-based methods commonly fail to find equilibria, and even computing local approximate equilibria has remained daunting. We shed light on these challenges through a combination of learning-theoretic, complexity-theoretic, game-theoretic and topological techniques, presenting obstacles and opportunities for Machine Learning and Game Theory going forward. I will assume no Deep Learning background for this talk and present results from joint works with S. Skoulakis and M. Zampetakis [Daskalakis et al., 2021] as well as with N. Golowich and K. Zhang [Daskalakis et al., 2022].
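As a minimal illustration of the failure mode the abstract mentions (not taken from the talk itself), consider simultaneous gradient descent-ascent on the bilinear min-max objective f(x, y) = xy, whose unique equilibrium is the origin. Each simultaneous step multiplies the squared distance from the equilibrium by (1 + lr²), so the iterates spiral outward rather than converging:

```python
# Simultaneous gradient descent-ascent (GDA) on f(x, y) = x * y.
# Player x minimizes, player y maximizes; the unique equilibrium is (0, 0).

def gda(x, y, lr=0.1, steps=100):
    """Run `steps` simultaneous GDA updates and return the final point."""
    for _ in range(steps):
        gx, gy = y, x                         # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy       # simultaneous update
    return x, y

x, y = gda(1.0, 1.0)
# Squared distance from the equilibrium grows as (1 + lr**2)**steps * 2,
# so x*x + y*y ends up far larger than the starting value of 2.
print(x * x + y * y)
```

A short calculation confirms the growth factor: with (x', y') = (x - lr·y, y + lr·x), one gets x'² + y'² = (1 + lr²)(x² + y²), so no step size makes plain simultaneous GDA converge here.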

Subject Classification

ACM Subject Classification
  • Theory of computation → Multi-agent learning
  • Theory of computation → Multi-agent reinforcement learning
  • Theory of computation → Solution concepts in game theory
  • Theory of computation → Exact and approximate computation of equilibria

Keywords
  • Deep Learning
  • Multi-Agent (Reinforcement) Learning
  • Game Theory
  • Nonconvex Optimization
  • PPAD


References
  1. Constantinos Daskalakis, Noah Golowich, and Kaiqing Zhang. The complexity of Markov equilibrium in stochastic games. arXiv preprint, 2022.
  2. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1466-1478, 2021.