Abstract
Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. Alternatively, the same classifier may be used to inform multiple decisions, which correspond to multiple loss functions, requiring multiple learning algorithms to be run on the same data. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.
We introduce the notion of an (L, C)-omnipredictor, which could be used to optimize any loss in a family L. Once the loss function is set, the outputs of the predictor can be post-processed (a simple univariate data-independent transformation of individual predictions) to do well compared with any hypothesis from the class C. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class C, irrespective of the loss function in L.
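To make the post-processing step concrete, here is a hypothetical sketch (not code from the paper): given a single prediction p, treated as the probability that a binary outcome y equals 1, the loss-dependent step simply picks the action with the smallest expected loss under p. The names `post_process`, `loss`, and `actions` are illustrative assumptions, not the paper's notation.

```python
def post_process(p, loss, actions):
    """Return the action t minimizing p*loss(t, 1) + (1-p)*loss(t, 0).

    This univariate, data-independent step is the only place the loss
    function enters: the predictor producing p never sees `loss`.
    """
    return min(actions, key=lambda t: p * loss(t, 1) + (1 - p) * loss(t, 0))

# Example: squared loss over a grid of actions in [0, 1]; the expected
# loss p*(t-1)^2 + (1-p)*t^2 is minimized at t = p.
squared = lambda t, y: (t - y) ** 2
grid = [i / 100 for i in range(101)]
best = post_process(0.3, squared, grid)  # -> 0.3
```

Swapping in a different loss (say, an asymmetric cost for false negatives) changes only this final step, not the learned predictor.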
We show that such "loss-oblivious" learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. A multicalibrated predictor doesn't aim to minimize some loss function, but rather to make calibrated predictions, even when conditioned on inputs lying in certain sets c belonging to a family C which is weakly learnable. We show that a C-multicalibrated predictor is also an (L, C)-omnipredictor, where L contains all convex loss functions with some mild Lipschitz conditions. The predictors are even omnipredictors with respect to sparse linear combinations of functions in C. As a corollary, we deduce that distribution-specific weak agnostic learning is complete for a large class of loss minimization tasks.
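The multicalibration condition can be illustrated with a hypothetical empirical check (again, not code from the paper): bucket a predictor's outputs, and within each set c from the family C, compare the average outcome in each bucket to the bucket's predicted value. The function name and the representation of C as membership tests are assumptions made for the sketch.

```python
from collections import defaultdict

def multicalibration_error(xs, ys, predict, family, round_to=1):
    """Worst-case empirical calibration gap over sets c in `family`.

    Each c is a membership test x -> bool. Within the cell of points in c
    whose (rounded) prediction is v, a calibrated predictor has average
    outcome close to v; we return the largest deviation over all cells.
    """
    worst = 0.0
    for c in family:
        cells = defaultdict(list)
        for x, y in zip(xs, ys):
            if c(x):
                cells[round(predict(x), round_to)].append(y)
        for v, outcomes in cells.items():
            worst = max(worst, abs(sum(outcomes) / len(outcomes) - v))
    return worst
```

Ordinary calibration is the special case where the family contains only the whole domain; enlarging C to a weakly learnable family is what gives the omniprediction guarantee.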
In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
BibTeX Entry
@InProceedings{gopalan_et_al:LIPIcs.ITCS.2022.79,
author = {Gopalan, Parikshit and Kalai, Adam Tauman and Reingold, Omer and Sharan, Vatsal and Wieder, Udi},
title = {{Omnipredictors}},
booktitle = {13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
pages = {79:1--79:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-217-4},
ISSN = {1868-8969},
year = {2022},
volume = {215},
editor = {Braverman, Mark},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2022/15675},
URN = {urn:nbn:de:0030-drops-156755},
doi = {10.4230/LIPIcs.ITCS.2022.79},
annote = {Keywords: Loss-minimization, multi-group fairness, agnostic learning, boosting}
}
Keywords: Loss-minimization, multi-group fairness, agnostic learning, boosting
Collection: 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)
Issue Date: 2022
Date of publication: 25.01.2022