Calibration requires that a predictor's outputs be consistent with their Bayesian posteriors. For machine learning predictors, which should not distinguish between small perturbations, calibration errors are continuous in predictions, e.g., smooth calibration error [Foster and Hart, 2018] and distance to calibration [Błasiok et al., 2023]. In contrast, decision-makers who act on predictions make optimal decisions discontinuously in probability space, and thus experience loss from miscalibration discontinuously. Calibration errors for decision-making are correspondingly discontinuous, e.g., Expected Calibration Error (ECE) [Foster and Vohra, 1997] and Calibration Decision Loss (CDL) [Hu and Wu, 2024]. A predictor with a low calibration error for machine learning may therefore suffer a high calibration error for decision-making, i.e., it may not be trustworthy for decision-makers who optimize under the assumption that its predictions are correct. It is natural to ask whether post-processing a predictor with a low calibration error for machine learning suffices to achieve a low calibration error for decision-making. In our paper, we show that post-processing an online predictor with ε distance to calibration achieves O(√ε) ECE and CDL, which is asymptotically optimal among post-processing schemes. The post-processing algorithm adds noise to make predictions differentially private. This optimal post-processing bound for low-distance-to-calibration predictors is nonetheless worse than the guarantees of existing online calibration algorithms that directly optimize ECE and CDL.
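As a rough illustration of noise-based post-processing, the sketch below perturbs a prediction with Laplace noise on a discretized grid so that nearby predictions induce similar output distributions. This is a hypothetical toy, not the paper's algorithm; the function name, grid width, and noise scale are all illustrative assumptions.

```python
import math
import random

def dp_postprocess(p, eps=0.5, grid=0.1, seed=None):
    """Hypothetical sketch of differentially private post-processing:
    snap a prediction p in [0, 1] to a grid, add Laplace noise with
    scale grid/eps, and clip back to a valid probability.
    (The paper's actual mechanism and parameters are not reproduced.)"""
    rng = random.Random(seed)
    # Sample Laplace(0, grid/eps) noise via the inverse CDF.
    u = rng.random() - 0.5
    noise = -(grid / eps) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    # Discretize, perturb, and clip to [0, 1].
    q = round(p / grid) * grid + noise
    return min(1.0, max(0.0, q))
```

Under this kind of scheme, two predictions that differ by less than one grid cell produce outputs with close distributions, which is the smoothness property the differential-privacy framing provides.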
@InProceedings{hartline_et_al:LIPIcs.FORC.2025.16, author = {Hartline, Jason and Wu, Yifan and Yang, Yunran}, title = {{Smooth Calibration and Decision Making}}, booktitle = {6th Symposium on Foundations of Responsible Computing (FORC 2025)}, pages = {16:1--16:26}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-367-6}, ISSN = {1868-8969}, year = {2025}, volume = {329}, editor = {Bun, Mark}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.16}, URN = {urn:nbn:de:0030-drops-231438}, doi = {10.4230/LIPIcs.FORC.2025.16}, annote = {Keywords: Calibration, calibration errors, decision making, differential privacy} }