Causal Intersectionality and Fair Ranking

Authors: Ke Yang, Joshua R. Loftus, Julia Stoyanovich



  • Filesize: 1.72 MB
  • 20 pages

Author Details

Ke Yang
  • New York University, NY, USA
Joshua R. Loftus
  • London School of Economics, UK
Julia Stoyanovich
  • New York University, NY, USA

Cite As

Ke Yang, Joshua R. Loftus, and Julia Stoyanovich. Causal Intersectionality and Fair Ranking. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


In this paper we propose a causal modeling approach to intersectional fairness, and a flexible, task-specific method for computing intersectionally fair rankings. Rankings are used in many contexts, ranging from Web search to college admissions, but causal inference for fair rankings has received limited attention. Similarly, the growing literature on causal fairness has paid little attention to intersectionality. By bringing these issues together in a formal causal framework, we make the application of intersectionality in algorithmic fairness explicit, connected to important real-world effects and domain knowledge, and transparent about technical limitations. We experimentally evaluate our approach on real and synthetic datasets, exploring its behavior under different structural assumptions.
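The abstract's core idea — ranking by scores derived from a causal model rather than by observed scores — can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual method: the linear structural equations, the coefficients, and the variable names (g for intersectional group membership, u for latent merit, m for a mediator, y for the observed score) are all hypothetical.

```python
import numpy as np

# Hypothetical linear structural causal model (an assumption for illustration):
# intersectional group membership G (e.g., the intersection of gender and race)
# lowers an observed score Y both directly and through a mediator M
# (e.g., access to preparation resources).
rng = np.random.default_rng(0)

g = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = disadvantaged intersectional group
u = rng.normal(size=g.size)              # latent merit (exogenous noise)
m = 0.5 * u - 0.8 * g                    # mediator, depressed when g = 1
y = m + u - 0.5 * g                      # observed score, direct penalty when g = 1

# Counterfactual scores under the intervention do(G = 0): remove both the
# direct and the mediated penalty while keeping each candidate's latent merit.
m_cf = 0.5 * u
y_cf = m_cf + u

# Rank candidates by counterfactually adjusted rather than observed scores.
ranking_observed = np.argsort(-y)
ranking_counterfactual = np.argsort(-y_cf)
```

Under this toy model, candidates from the disadvantaged group can only move up when ranked by the counterfactual scores, since the intervention removes both penalty terms while leaving their latent merit unchanged; candidates outside that group keep their observed scores exactly.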

ACM Subject Classification
  • Computing methodologies → Ranking

Keywords and phrases
  • fairness
  • intersectionality
  • ranking
  • causality



