Inherent Trade-Offs in the Fair Determination of Risk Scores

Authors: Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan

Cite As

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017). https://doi.org/10.4230/LIPIcs.ITCS.2017.43

Abstract

Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
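
The abstract does not enumerate the three conditions; in the body of the paper they are (A) calibration within groups, (B) balance for the negative class (people whose outcome is 0 should receive the same average score in each group), and (C) balance for the positive class (likewise for outcome 1), and the theorem shows all three can hold simultaneously only when the groups have equal base rates or the prediction is perfect. Below is a minimal Python sketch, not from the paper, of how one might check the three conditions empirically on a scored dataset; the function name fairness_report, the score binning, and the toy data are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): empirically checking the three
# fairness conditions discussed above on a scored dataset. The data, the
# function name, and the score binning are hypothetical.
from collections import defaultdict

def fairness_report(records):
    """records: iterable of (group, score, outcome), score in [0, 1],
    outcome in {0, 1}. Reports, per group:
      (A) calibration within groups: among people assigned score s,
          roughly a fraction s should have outcome 1;
      (B) balance for the negative class: the average score of
          outcome-0 people should match across groups;
      (C) balance for the positive class: same for outcome-1 people."""
    by_group = defaultdict(list)
    for group, score, outcome in records:
        by_group[group].append((score, outcome))
    report = {}
    for group, rows in by_group.items():
        # (A) bin scores to one decimal and compare each bin's empirical
        # positive rate against the bin's nominal score
        bins = defaultdict(list)
        for score, outcome in rows:
            bins[round(score, 1)].append(outcome)
        calibration = {s: sum(o) / len(o) for s, o in sorted(bins.items())}
        # (B) and (C): class-conditional average scores
        neg = [s for s, o in rows if o == 0]
        pos = [s for s, o in rows if o == 1]
        report[group] = {
            "calibration (score bin -> empirical positive rate)": calibration,
            "avg score | outcome 0": sum(neg) / len(neg) if neg else None,
            "avg score | outcome 1": sum(pos) / len(pos) if pos else None,
        }
    return report

if __name__ == "__main__":
    toy_data = [
        ("A", 0.2, 0), ("A", 0.2, 0), ("A", 0.2, 1), ("A", 0.8, 1),
        ("B", 0.4, 0), ("B", 0.4, 1), ("B", 0.6, 0), ("B", 0.6, 1),
    ]
    for group, stats in fairness_report(toy_data).items():
        print(group, stats)
```

On this toy data the positive-class average scores happen to agree across groups (0.5 vs. 0.5) while the negative-class averages differ (0.2 vs. 0.5): the kind of partial satisfaction the impossibility theorem says is unavoidable outside the two special cases.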

Keywords
  • algorithmic fairness
  • risk tools
  • calibration
