Tight Lower Bounds for Multiplicative Weights Algorithmic Families

Authors: Nick Gravin, Yuval Peres, and Balasubramanian Sivan



File: LIPIcs.ICALP.2017.48.pdf (0.53 MB, 14 pages)
Cite As

Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Tight Lower Bounds for Multiplicative Weights Algorithmic Families. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 48:1-48:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017) https://doi.org/10.4230/LIPIcs.ICALP.2017.48

Abstract

We study the fundamental problem of prediction with expert advice and develop regret lower bounds for a large family of algorithms for this problem. We develop simple adversarial primitives that lend themselves to various combinations, leading to sharp lower bounds for many algorithmic families. We use these primitives to show that the classic Multiplicative Weights Algorithm (MWA) has a regret of sqrt(T*ln(k)/2) (where T is the time horizon and k is the number of experts), thereby completely closing the gap between upper and lower bounds. We further show a regret lower bound of (2/3)*sqrt(T*ln(k)/2) for a much more general family of algorithms than MWA, in which the learning rate can be arbitrarily varied over time, or even picked from arbitrary distributions over time. We also use our primitives to construct adversaries in the geometric horizon setting for MWA, precisely characterizing the regret as 0.391/sqrt(\delta) for the case of 2 experts and establishing a lower bound of (1/2)*sqrt(ln(k)/(2*\delta)) for an arbitrary number of experts k (here \delta is the probability that the game ends in any given round).
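To make the setting concrete, here is a minimal sketch of the classic Multiplicative Weights (Hedge) update that the bounds above concern: in each round the learner plays a distribution over k experts proportional to their weights, observes losses in [0, 1], and multiplies each weight by exp(-eta * loss). The random loss matrix and the learning-rate choice eta = sqrt(8 ln(k)/T) below are illustrative assumptions for demonstration, not the paper's adversarial constructions; with this eta the standard Hedge analysis guarantees regret at most sqrt(T*ln(k)/2), matching the lower bound shown in the paper.

```python
import math
import random

def mwa(losses, eta):
    """Run Multiplicative Weights (Hedge) on a loss sequence.

    losses: list of T rounds, each a list of k losses in [0, 1].
    eta: learning rate.
    Returns (algorithm's expected loss, best single expert's loss).
    """
    k = len(losses[0])
    weights = [1.0] * k
    alg_loss = 0.0
    for round_losses in losses:
        total = sum(weights)
        probs = [w / total for w in weights]           # play experts proportionally to weight
        alg_loss += sum(p * l for p, l in zip(probs, round_losses))
        # multiplicative update: penalize each expert by its loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    best = min(sum(losses[t][i] for t in range(len(losses))) for i in range(k))
    return alg_loss, best

# Illustrative run (not the paper's adversary): uniform random losses, 2 experts.
T, k = 1000, 2
eta = math.sqrt(8 * math.log(k) / T)   # classic tuning giving regret <= sqrt(T*ln(k)/2)
random.seed(0)
losses = [[random.random() for _ in range(k)] for _ in range(T)]
alg, best = mwa(losses, eta)
regret = alg - best
```

Against such benign i.i.d. losses the realized regret is far below the worst-case bound; the point of the paper's adversarial primitives is to construct loss sequences that force the regret all the way up to sqrt(T*ln(k)/2).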

Keywords
  • Multiplicative Weights
  • Lower Bounds
  • Adversarial Primitives
