
A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates Be Reconciled?

Authors: Claire Lazar Reich, Suhas Vijaykumar



File

LIPIcs.FORC.2021.4.pdf (0.79 MB, 21 pages)

Author Details

Claire Lazar Reich
  • MIT Statistics Center and Department of Economics, Cambridge, MA, USA
Suhas Vijaykumar
  • MIT Statistics Center and Department of Economics, Cambridge, MA, USA

Acknowledgements

Many thanks to Anna Mikusheva, Iván Werning, and David Autor for their valuable advice. We're also deeply grateful for the support of Ben Deaner, Lou Crandall, Pari Sastry, Tom Brennan, Jim Poterba, Rachael Meager, and Frank Schilbach with whom we have had energizing and productive conversations. Thank you to Deborah Plana, Pooya Molavi, Adam Fisch, and Yonadav Shavit for commenting on the manuscript at its advanced stages.

Cite As

Claire Lazar Reich and Suhas Vijaykumar. A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates Be Reconciled?. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 4:1-4:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.FORC.2021.4

Abstract

Decision makers increasingly rely on algorithmic risk scores to determine access to binary treatments including bail, loans, and medical interventions. In these settings, we reconcile two fairness criteria that were previously shown to be in conflict: calibration and error rate equality. In particular, we derive necessary and sufficient conditions for the existence of calibrated scores that yield classifications achieving equal error rates at any given group-blind threshold. We then present an algorithm that searches for the most accurate score subject to both calibration and minimal error rate disparity. Applied to the COMPAS criminal risk assessment tool, we show that our method can eliminate error disparities while maintaining calibration. In a separate application to credit lending, we compare our procedure to the omission of sensitive features and show that it raises both profit and the probability that creditworthy individuals receive loans.
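The two criteria reconciled in the abstract can be made concrete with a small sketch. The code below is purely illustrative and is not the authors' algorithm: `error_rates` computes per-group false-positive and false-negative rates under a group-blind threshold rule, and `calibration_gap` measures the largest deviation between the mean outcome and the mean score within (group, score-bin) cells, which is zero for a score that is perfectly calibrated within every group. The function names and the binning scheme are assumptions made for this sketch.

```python
import numpy as np

def error_rates(scores, labels, groups, threshold):
    """Per-group (FPR, FNR) under the group-blind rule: predict 1 iff
    score >= threshold. Equal error rates means these tuples match
    across groups."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        pred = scores[m] >= threshold
        y = labels[m].astype(bool)
        fpr = np.mean(pred[~y]) if (~y).any() else 0.0  # share of true 0s predicted 1
        fnr = np.mean(~pred[y]) if y.any() else 0.0     # share of true 1s predicted 0
        rates[g] = (float(fpr), float(fnr))
    return rates

def calibration_gap(scores, labels, groups, bins=10):
    """Largest |mean outcome - mean score| over nonempty (group, bin)
    cells; a group-wise calibrated score drives this toward zero."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    gap = 0.0
    for g in np.unique(groups):
        m = groups == g
        which = np.digitize(scores[m], edges[1:-1])
        for b in np.unique(which):
            cell = which == b
            gap = max(gap, abs(labels[m][cell].mean() - scores[m][cell].mean()))
    return gap
```

For example, a score that is calibrated in both groups can still produce very different error rates at a shared threshold, which is the tension the paper's conditions and search algorithm address.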

ACM Subject Classification
  • Mathematics of computing → Probability and statistics
  • Social and professional topics → Computing / technology policy
  • Computing methodologies → Supervised learning
Keywords
  • fair prediction
  • impossibility results
  • screening decisions
  • classification
  • calibration
  • equalized odds
  • optimal transport
  • risk scores

