Multiplicative Metric Fairness Under Composition

Author: Milan Mossé



Author Details

Milan Mossé
  • Department of Philosophy, University of California at Berkeley, CA, USA

Acknowledgements

Many thanks to Omer Reingold and Li-Yang Tan for their generous guidance and support with this project. Thanks to James Evershed, Wes Holliday, Niko Kolodny, Gabrielle Candès, and three anonymous reviewers for helpful comments.

Cite As

Milan Mossé. Multiplicative Metric Fairness Under Composition. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023) https://doi.org/10.4230/LIPIcs.FORC.2023.4

Abstract

Dwork, Hardt, Pitassi, Reingold, & Zemel [Dwork et al., 2012] introduced two notions of fairness, each of which is meant to formalize the idea of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers that are fair when considered in isolation [Chawla and Jagadeesan, 2020; Chawla et al., 2022; Dwork and Ilvento, 2018; Dwork et al., 2020; Ilvento et al., 2020] and in work studying the relationship between fair treatment of individuals and fair treatment of groups [Dwork et al., 2012; Dwork and Ilvento, 2018; Kim et al., 2018]. Here, we extend these lines of research to the second, less-studied notion, which we call multiplicative metric fairness. In particular, we exactly characterize the fairness of conjunctions and disjunctions of multiplicative metric fair classifiers, and the extent to which a classifier satisfying multiplicative metric fairness also treats groups fairly. This characterization reveals that whereas additive metric fairness becomes easier to satisfy when probabilities of acceptance are small, leading to unfairness under functional and group compositions, multiplicative metric fairness is better behaved, owing to its scale-invariance.

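To make the contrast in the abstract concrete, the minimal sketch below checks, for a single pair of individuals at distance d(x, y), an additive constraint of the form |P[accept x] − P[accept y]| ≤ d(x, y) and a multiplicative (log-ratio) constraint of the form |log P[accept x] − log P[accept y]| ≤ d(x, y). The function names, the particular log-ratio form, and the numbers are illustrative assumptions rather than the paper's formal definitions; they only show why small acceptance probabilities make the additive constraint easy to satisfy while the multiplicative constraint is scale-invariant.

```python
import math

# Hedged sketch: pairwise versions of the two fairness constraints discussed
# in the abstract. The exact forms and numbers below are assumptions made for
# illustration, not the paper's formal statements.

def additively_fair(p_x: float, p_y: float, d_xy: float) -> bool:
    """Additive constraint: |P[accept x] - P[accept y]| <= d(x, y)."""
    return abs(p_x - p_y) <= d_xy

def multiplicatively_fair(p_x: float, p_y: float, d_xy: float) -> bool:
    """Multiplicative constraint: |log P[accept x] - log P[accept y]| <= d(x, y)."""
    return abs(math.log(p_x) - math.log(p_y)) <= d_xy

# Two similar individuals (d(x, y) = 0.01) with small acceptance probabilities.
p_x, p_y, d_xy = 0.02, 0.01, 0.01

print(additively_fair(p_x, p_y, d_xy))        # True: the 0.01 gap fits within d(x, y)
print(multiplicatively_fair(p_x, p_y, d_xy))  # False: y is accepted half as often as x

# Scaling both probabilities by 10 preserves the ratio but not the gap, so the
# additive verdict flips while the multiplicative verdict is unchanged.
print(additively_fair(10 * p_x, 10 * p_y, d_xy))        # False: the gap grew to 0.1
print(multiplicatively_fair(10 * p_x, 10 * p_y, d_xy))  # False: same verdict as before
```
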
Subject Classification

ACM Subject Classification
  • Mathematics of computing → Probability and statistics
Keywords
  • algorithmic fairness
  • metric fairness
  • fairness under composition

References

  1. Reuben Binns. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 514-524, 2020.
  2. Amanda Bower, Sarah N Kitchen, Laura Niss, Martin J Strauss, Alexander Vargas, and Suresh Venkatasubramanian. Fair pipelines. arXiv preprint, 2017. URL: https://arxiv.org/abs/1707.00391.
  3. Shuchi Chawla and Meena Jagadeesan. Fairness in ad auctions through inverse proportionality. arXiv preprint, 2020. URL: https://arxiv.org/abs/2003.13966.
  4. Shuchi Chawla, Rojin Rezvan, and Nathaniel Sauerberg. Individually-fair auctions for multi-slot sponsored search. In 3rd Symposium on Foundations of Responsible Computing, page 1, 2022.
  5. Alexandra Chouldechova and Aaron Roth. The frontiers of fairness in machine learning. arXiv preprint, 2018. URL: https://arxiv.org/abs/1810.08810.
  6. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214-226, 2012.
  7. Cynthia Dwork and Christina Ilvento. Fairness under composition. arXiv preprint, 2018. URL: https://arxiv.org/abs/1806.06122.
  8. Cynthia Dwork, Christina Ilvento, and Meena Jagadeesan. Individual fairness in pipelines. arXiv preprint, 2020. URL: https://arxiv.org/abs/2004.05167.
  9. Vitalii Emelianov, George Arvanitakis, Nicolas Gast, Krishna Gummadi, and Patrick Loiseau. The price of local fairness in multistage selection. arXiv preprint, 2019. URL: https://arxiv.org/abs/1906.06613.
  10. Swati Gupta and Vijay Kamble. Individual fairness in hindsight. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 805-806, 2019.
  11. Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315-3323, 2016.
  12. Christina Ilvento, Meena Jagadeesan, and Shuchi Chawla. Multi-category fairness in sponsored search auctions. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 348-358, 2020.
  13. Michael P Kim, Omer Reingold, and Guy N Rothblum. Fairness through computationally-bounded awareness. arXiv preprint, 2018. URL: https://arxiv.org/abs/1803.03239.
  14. Ya'acov Ritov, Yuekai Sun, and Ruofei Zhao. On conditional parity as a notion of non-discrimination in machine learning. arXiv preprint, 2017. URL: https://arxiv.org/abs/1706.08519.
  15. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics, pages 962-970. PMLR, 2017.