In this talk, we overview a simple and user-friendly framework developed in [Noarov et al., 2021] that can be used to derive online learning algorithms in a number of settings. In the core framework, at every round, an adaptive adversary introduces a new game, consisting of an action space for the learner, an action space for the adversary, and a vector-valued objective function that is concave-convex in every coordinate. The learner and the adversary then play in this game. The learner’s goal is to play so as to minimize the maximum coordinate of the cumulative vector-valued loss. The resulting one-shot game is not concave-convex, and so the minimax theorem does not apply. Nevertheless, we give a simple algorithm that can compete with the benchmark in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our simple framework by using it to derive optimal bounds and algorithms across a variety of domains. This includes no-regret learning: we can recover optimal algorithms and bounds for minimizing external regret, internal regret, adaptive regret, multigroup regret, subsequence regret, and permutation regret in the sleeping experts setting. It also includes (multi)calibration [Hébert-Johnson et al., 2018] and related notions: we are able to recover recently derived algorithms and bounds for online adversarial multicalibration [Gupta et al., 2021], mean-conditioned moment multicalibration [Jung et al., 2021], and prediction interval multivalidity [Gupta et al., 2021]. Finally, we use the framework to derive a new variant of Blackwell’s Approachability Theorem, which we term "Fast Polytope Approachability".
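To make the benchmark concrete, here is a minimal formalization of the guarantee described above; the notation is ours and is not fixed by the abstract. Suppose that in round $t$ the adversary presents action spaces $\mathcal{X}_t$ and $\mathcal{Y}_t$ together with a loss $\ell_t = (\ell_t^1, \ldots, \ell_t^d)$, each coordinate of which is convex in the learner's action and concave in the adversary's action. Writing $w_t$ for the per-round value when the adversary must announce its action first, the learner's guarantee takes the form

$$
w_t \;=\; \sup_{y \in \mathcal{Y}_t}\, \inf_{x \in \mathcal{X}_t}\, \max_{j \in [d]}\, \ell_t^j(x, y),
\qquad
\max_{j \in [d]} \sum_{t=1}^{T} \ell_t^j(x_t, y_t) \;\le\; \sum_{t=1}^{T} w_t \;+\; R_T,
$$

where $x_t, y_t$ are the actions actually played and $R_T$ is a regret term that diminishes optimally with $T$ (for losses bounded in $[-1,1]$, a rate of order $\sqrt{T \log d}$ would be consistent with the claimed optimality, though the abstract does not state an explicit rate).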