HappyMap: A Generalized Multicalibration Method

Authors: Zhun Deng, Cynthia Dwork, Linjun Zhang




File

LIPIcs.ITCS.2023.41.pdf
  • Filesize: 0.88 MB
  • 23 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2023.41

Author Details

Zhun Deng
  • Department of Computer Science, Columbia University, New York, NY, USA
Cynthia Dwork
  • Department of Computer Science, Harvard University, Cambridge, MA, USA
Linjun Zhang
  • Department of Statistics, Rutgers University, Piscataway, NJ, USA

Acknowledgements

We thank all the reviewers for their comments and suggestions. We are indebted to Aaron Roth for his insightful and valuable feedback, which greatly improved the paper.

Cite As

Zhun Deng, Cynthia Dwork, and Linjun Zhang. HappyMap: A Generalized Multicalibration Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 41:1-41:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023). https://doi.org/10.4230/LIPIcs.ITCS.2023.41

Abstract

Multicalibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multicalibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity re-weighting functions, multicalibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future, target domains (approximately) captured by the re-weightings.
Formally, multicalibration with respect to C bounds |𝔼_{(x,y)∼D}[c(f(x),x)⋅(f(x)-y)]| for all c ∈ C. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose s-Happy Multicalibration, a generalization of multicalibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches.
We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
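
Concretely, the meta-algorithm can be read as a boosting-style patching loop: as long as some auditor c ∈ C has empirical correlation with the mapping s(f(x), y) exceeding a tolerance τ, the predictions are nudged against that auditor and projected back to the prediction range. The sketch below is illustrative only; the function names, the simple list of subgroup auditors, and the signed gradient-style update are assumptions made for exposition, not the authors' implementation.

# Minimal sketch of a HappyMap-style patching loop (illustrative assumptions,
# not the paper's code): repeatedly find the auditor most correlated with
# s(f(x), y) and move the predictions against it until all correlations are small.
import numpy as np

def happy_map(f, x, y, s_map, auditors, eta=0.1, tau=1e-3, max_iter=1000):
    """Patch predictions f on data (x, y) until no auditor c in `auditors`
    correlates with the mapping s(f(x), y) by more than tau.

    f        : array of initial predictions f(x_i), assumed to lie in [0, 1]
    s_map    : callable s(f_x, y); s(f_x, y) = f_x - y recovers plain multicalibration
    auditors : callables c(f_x, x) standing in for the class C
    """
    f = np.asarray(f, dtype=float).copy()
    for _ in range(max_iter):
        s_vals = s_map(f, y)
        # empirical version of E[c(f(x), x) * s(f(x), y)] for each auditor
        corrs = np.array([np.mean(c(f, x) * s_vals) for c in auditors])
        i = int(np.argmax(np.abs(corrs)))
        if abs(corrs[i]) <= tau:
            break  # approximately s-Happy multicalibrated w.r.t. these auditors
        # move the predictions against the violating auditor, project back to [0, 1]
        f = np.clip(f - eta * np.sign(corrs[i]) * auditors[i](f, x), 0.0, 1.0)
    return f

# Special case: s(f_x, y) = f_x - y with subgroup-indicator auditors
# (synthetic data, purely for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
Y = (X[:, 0] + rng.normal(scale=0.1, size=500) > 0.5).astype(float)
f0 = np.full(500, 0.5)
auditors = [lambda fx, x: (x[:, 0] > 0.5).astype(float),
            lambda fx, x: (x[:, 0] <= 0.5).astype(float)]
f_patched = happy_map(f0, X, Y, lambda fx, y: fx - y, auditors)

Choosing s(f_x, y) = f_x - y recovers ordinary multicalibration; other choices of s correspond to the uncertainty-quantification, covariate-shift, and missing-data applications described in the abstract.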

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
  • Theory of computation → Theory and algorithms for application domains
Keywords
  • algorithmic fairness
  • target-independent learning
  • transfer learning

