Fairness Under Composition

Authors: Cynthia Dwork, Christina Ilvento



File

LIPIcs.ITCS.2019.33.pdf
  • Filesize: 0.59 MB
  • 20 pages

Author Details

Cynthia Dwork
  • Harvard John A Paulson School of Engineering and Applied Science, Radcliffe Institute for Advanced Study, Cambridge, MA, USA
Christina Ilvento
  • Harvard John A Paulson School of Engineering and Applied Science, Cambridge, MA, USA

Cite As

Cynthia Dwork and Christina Ilvento. Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019).
https://doi.org/10.4230/LIPIcs.ITCS.2019.33

Abstract

Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.
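The abstract's central claim (individually fair classifiers need not compose into a fair system) can be illustrated with a small numeric sketch. This is a hypothetical example, not taken from the paper: two classifiers each satisfy the individual-fairness Lipschitz condition |p_i(x) - p_i(y)| <= d(x, y) from Dwork et al., yet requiring a candidate to pass both (AND composition) can violate that bound.

```python
# Hypothetical sketch: AND composition of two individually fair classifiers.
# Individual fairness (Dwork et al. 2012): acceptance probabilities of any two
# individuals x, y may differ by at most their metric distance d(x, y).

d_xy = 0.2  # assumed metric distance between individuals x and y

# Acceptance probabilities of two independent classifiers, each d-Lipschitz:
p1 = {"x": 1.0, "y": 1.0 - d_xy}
p2 = {"x": 1.0, "y": 1.0 - d_xy}

assert abs(p1["x"] - p1["y"]) <= d_xy  # classifier 1 is fair in isolation
assert abs(p2["x"] - p2["y"]) <= d_xy  # classifier 2 is fair in isolation

# Probability of passing BOTH classifiers (independent AND composition):
p_and = {u: p1[u] * p2[u] for u in ("x", "y")}
gap = abs(p_and["x"] - p_and["y"])  # = 2*d_xy - d_xy**2 = 0.36

print(f"composed gap = {gap:.2f}, fair bound = {d_xy}")
# The composed gap 0.36 exceeds d_xy = 0.2: the system is not d-Lipschitz,
# even though each component classifier is.
```

In general the gap grows as 2d - d², so for any metric distance d < 1 the AND of two tight d-Lipschitz classifiers exceeds the fairness bound; the paper's constructions address exactly this kind of failure.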

Subject Classification

ACM Subject Classification
  • Theory of computation → Computational complexity and cryptography
  • Theory of computation → Design and analysis of algorithms
  • Theory of computation → Theory and algorithms for application domains
Keywords
  • algorithmic fairness
  • fairness
  • fairness under composition

References

  1. Amanda Bower, Sarah N. Kitchen, Laura Niss, Martin J. Strauss, Alexander Vargas, and Suresh Venkatasubramanian. Fair Pipelines. CoRR, abs/1707.00391, 2017. URL: http://arxiv.org/abs/1707.00391.
  2. Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint, 2017. URL: http://arxiv.org/abs/1703.00056.
  3. Amit Datta, Michael Carl Tschantz, and Anupam Datta. Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1):92-112, 2015.
  4. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214-226. ACM, 2012.
  5. Stephen Gillen, Christopher Jung, Michael Kearns, and Aaron Roth. Online Learning with an Unknown Fairness Metric. arXiv preprint, 2018. URL: http://arxiv.org/abs/1802.06936.
  6. Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315-3323, 2016.
  7. Ursula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Calibration for the (Computationally-Identifiable) Masses. arXiv preprint, 2017. URL: http://arxiv.org/abs/1711.08513.
  8. Lily Hu and Yiling Chen. Fairness at Equilibrium in the Labor Market. CoRR, abs/1707.01590, 2017. URL: http://arxiv.org/abs/1707.01590.
  9. Faisal Kamiran and Toon Calders. Classifying without discriminating. In Computer, Control and Communication, 2009. IC4 2009. 2nd International Conference on, pages 1-6. IEEE, 2009.
  10. Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on, pages 643-650. IEEE, 2011.
  11. Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. arXiv preprint, 2017. URL: http://arxiv.org/abs/1711.05144.
  12. Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. Avoiding Discrimination through Causal Reasoning. arXiv preprint, 2017. URL: http://arxiv.org/abs/1706.02744.
  13. Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Fairness Through Computationally-Bounded Awareness. arXiv preprint, 2018. URL: http://arxiv.org/abs/1803.03239.
  14. Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. CoRR, abs/1609.05807, 2016. URL: http://arxiv.org/abs/1609.05807.
  15. Peter Kuhn and Kailing Shen. Gender discrimination in job ads: Evidence from China. The Quarterly Journal of Economics, 128(1):287-336, 2012.
  16. Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. Counterfactual Fairness. arXiv preprint, 2017. URL: http://arxiv.org/abs/1703.06856.
  17. Anja Lambrecht and Catherine E. Tucker. Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads, 2016.
  18. Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed Impact of Fair Machine Learning. arXiv preprint, 2018. URL: http://arxiv.org/abs/1803.04383.
  19. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning Adversarially Fair and Transferable Representations. arXiv preprint, 2018. URL: http://arxiv.org/abs/1802.06309.
  20. Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 560-568. ACM, 2008.
  21. Ya'acov Ritov, Yuekai Sun, and Ruofei Zhao. On conditional parity as a notion of non-discrimination in machine learning. arXiv preprint, 2017. URL: http://arxiv.org/abs/1706.08519.
  22. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 325-333, 2013.