Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?

Authors: Avrim Blum, Kevin Stangl




File

LIPIcs.FORC.2020.3.pdf
  • Filesize: 0.58 MB
  • 20 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.FORC.2020.3

Author Details

Avrim Blum
  • Toyota Technological Institute at Chicago, 6045 South Kenwood Avenue, Chicago, IL, 60637, USA
Kevin Stangl
  • Toyota Technological Institute at Chicago, 6045 South Kenwood Avenue, Chicago, IL, 60637, USA

Acknowledgements

We would like to thank Jon Kleinberg and Manish Raghavan for their helpful and insightful comments on an earlier draft of this manuscript.

Cite As

Avrim Blum and Kevin Stangl. Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020). https://doi.org/10.4230/LIPIcs.FORC.2020.3

Abstract

Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data. We posit several ways in which training data may be biased, including a noisier or negatively biased labeling process on members of a disadvantaged group, a decreased prevalence of positive or negative examples from the disadvantaged group, or both. Given such biased training data, Empirical Risk Minimization (ERM) may produce a classifier that is not only biased but also has suboptimal accuracy on the true data distribution. We examine the ability of fairness-constrained ERM to correct this problem. In particular, we find that the Equal Opportunity fairness constraint [Hardt et al., 2016] combined with ERM will provably recover the Bayes optimal classifier under a range of bias models. We also consider other recovery methods, including re-weighting the training data, Equalized Odds, Demographic Parity, and Calibration. These theoretical results provide additional motivation for considering fairness interventions even if an actor cares primarily about accuracy.
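For readers skimming the abstract, the following is a minimal sketch, not drawn from the paper's own notation, of the Equal Opportunity condition of Hardt et al. [2016] together with one illustrative label-bias model of the kind the abstract describes. The group labels a and b and the flip rate ν are assumptions introduced here purely for intuition.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Equal Opportunity (Hardt, Price, Srebro 2016): the classifier \hat{Y}
% must have equal true-positive rates on the two groups A = a and A = b.
\[
  \Pr\bigl[\hat{Y}=1 \mid Y=1,\, A=a\bigr]
  \;=\;
  \Pr\bigl[\hat{Y}=1 \mid Y=1,\, A=b\bigr]
\]

% One illustrative label-bias model (the flip rate \nu and the choice of
% which group is disadvantaged are hypothetical, for intuition only):
% positive examples from the disadvantaged group b have their observed
% label \tilde{Y} flipped to 0 with probability \nu, while labels from
% the advantaged group a are observed without error.
\[
  \Pr\bigl[\tilde{Y}=0 \mid Y=1,\, A=b\bigr] = \nu,
  \qquad
  \Pr\bigl[\tilde{Y}=Y \mid A=a\bigr] = 1 .
\]

\end{document}
```

Under a model of this kind, unconstrained ERM fit to the observed labels tends to under-predict positives for the disadvantaged group; this is the failure mode that, per the abstract, an Equal Opportunity constraint combined with ERM can provably correct.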

Subject Classification

ACM Subject Classification
  • Theory of computation → Machine learning theory
Keywords
  • fairness in machine learning
  • equal opportunity
  • bias
  • machine learning

References

  1. Dana Angluin and Philip Laird. Learning From Noisy Examples. Machine Learning, 2(4):343-370, April 1988. URL: https://doi.org/10.1007/BF00116829.
  2. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine Bias. ProPublica, May 23, 2016.
  3. Marianne Bertrand and Sendhil Mullainathan. Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review, 94(4):991-1013, 2004.
  4. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems, pages 4349-4357, 2016.
  5. Joy Buolamwini and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Conference on Fairness, Accountability and Transparency, pages 77-91, 2018.
  6. Alexandra Chouldechova. Fair Prediction With Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data, 5(2):153-163, 2017.
  7. Danielle Keats Citron and Frank Pasquale. The Scored Society: Due Process for Automated Predictions. Wash. L. Rev., 89:1, 2014.
  8. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797-806. ACM, 2017.
  9. Maria De-Arteaga, Artur Dubrawski, and Alexandra Chouldechova. Learning under selective labels in the presence of expert consistency. arXiv preprint, 2018. URL: http://arxiv.org/abs/1807.00905.
  10. William Dieterich, Christina Mendoza, and Tim Brennan. COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity. Northpointe Inc., 2016.
  11. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. Fairness Through Awareness. In Innovations in Theoretical Computer Science 2012, Cambridge, MA, USA, January 8-10, 2012, pages 214-226, 2012. URL: https://doi.org/10.1145/2090236.2090255.
  12. Anthony W. Flores, Kristin Bechtel, and Christopher T. Lowenkamp. False Positives, False Negatives, and False Analyses: A Rejoinder to Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks. Fed. Probation, 80:38, 2016.
  13. Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. On the (im)possibility of fairness. CoRR, abs/1609.07236, 2016. URL: http://arxiv.org/abs/1609.07236.
  14. Moritz Hardt, Eric Price, and Nati Srebro. Equality of Opportunity in Supervised Learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3315-3323. Curran Associates, Inc., 2016. URL: http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf.
  15. Heinrich Jiang and Ofir Nachum. Identifying and Correcting Label Bias in Machine Learning. CoRR, abs/1901.04966, 2019. URL: http://arxiv.org/abs/1901.04966.
  16. Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, pages 43:1-43:23, 2017. URL: https://doi.org/10.4230/LIPIcs.ITCS.2017.43.
  17. Jon M. Kleinberg and Manish Raghavan. Selection Problems in the Presence of Implicit Bias. In 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, pages 33:1-33:17, 2018. URL: https://doi.org/10.4230/LIPIcs.ITCS.2018.33.
  18. Himabindu Lakkaraju, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 275-284. ACM, 2017.
  19. Kristian Lum and William Isaac. To predict and serve? Significance, 13(5):14-19, 2016.
  20. Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. On Fairness and Calibration. In Advances in Neural Information Processing Systems, pages 5680-5689, 2017.
  21. Rashida Richardson, Jason Schultz, and Kate Crawford. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review Online, Forthcoming, 2019.
  22. Samuel Yeom and Michael Carl Tschantz. Discriminative but Not Discriminatory: A Comparison of Fairness Definitions under Different Worldviews. arXiv preprint, 2018. URL: http://arxiv.org/abs/1808.08619.