3 Search Results for "Mullainathan, Sendhil"


Document
Mapping the Tradeoffs and Limitations of Algorithmic Fairness

Authors: Etam Benger and Katrina Ligett

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Sufficiency and separation are two fundamental criteria in classification fairness. For binary classifiers, these concepts correspond to subgroup calibration and equalized odds, respectively, and are known to be incompatible except in trivial cases. In this work, we explore a relaxation of these criteria based on f-divergences between distributions (essentially the same relaxation studied in the literature on approximate multicalibration), analyze their relationships, and derive implications for fair representations and downstream uses (post-processing) of representations. We show that when a protected attribute is determinable from features present in the data, the (relaxed) criteria of sufficiency and separation exhibit a tradeoff, forming a convex Pareto frontier. Moreover, we prove that when a protected attribute is not fully encoded in the data, achieving full sufficiency may be impossible. This finding not only strengthens the case against "fairness through unawareness" but also highlights an important caveat for work on (multi-)calibration.
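For orientation, the two criteria admit a standard formalization (the notation below is ours, not necessarily the paper's): with protected attribute A, label Y, and score or representation R,

% Sufficiency: the score screens off group membership from the label.
\text{Sufficiency (subgroup calibration):}\quad Y \perp A \mid R
% Separation: within each true-label class, scores are distributed identically across groups.
\text{Separation (equalized odds):}\quad R \perp A \mid Y

An f-divergence relaxation of the kind the abstract describes would bound the deviation from each exact independence, e.g. requiring D_f(P_{R \mid Y, A} \,\|\, P_{R \mid Y}) \le \varepsilon for approximate separation; the paper's precise definitions may differ.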

Cite as

Etam Benger and Katrina Ligett. Mapping the Tradeoffs and Limitations of Algorithmic Fairness. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{benger_et_al:LIPIcs.FORC.2025.19,
  author =	{Benger, Etam and Ligett, Katrina},
  title =	{{Mapping the Tradeoffs and Limitations of Algorithmic Fairness}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.19},
  URN =		{urn:nbn:de:0030-drops-231465},
  doi =		{10.4230/LIPIcs.FORC.2025.19},
  annote =	{Keywords: Algorithmic fairness, information theory, sufficiency-separation tradeoff}
}
Document
Bias In, Bias Out? Evaluating the Folk Wisdom

Authors: Ashesh Rambachan and Jonathan Roth

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so "biased" training data arise due to discriminatory selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithmic decision rule favors that group. We refer to this phenomenon as bias reversal. We then clarify the conditions that give rise to bias reversal. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
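The selective-labels mechanism is easy to reproduce in a small simulation. The sketch below is our own construction, not the paper's specification: the Gaussian quality model, the thresholds, and all variable names are illustrative assumptions. A naive success-rate estimate is fit only on individuals the biased decision-maker approved:

import numpy as np

# Minimal sketch of discriminatory selection into training data.
rng = np.random.default_rng(0)
n = 200_000
group = rng.integers(0, 2, n)                 # 0 = favored, 1 = disfavored
quality = rng.normal(0.0, 1.0, n)             # identical latent quality in both groups
signal = quality + rng.normal(0.0, 1.0, n)    # noisy signal seen by the human

# Biased human decision-maker: stricter approval bar for group 1.
approved = signal > np.where(group == 1, 1.0, 0.0)

# Outcomes (e.g., repayment) are observed only for approved individuals.
outcome = (quality + rng.normal(0.0, 0.5, n)) > 0

# "Training": estimate each group's success rate from the labeled data alone.
for g in (0, 1):
    labeled = approved & (group == g)
    print(f"group {g}: success rate among labeled = {outcome[labeled].mean():.3f}")

Because group 1 faced a stricter bar, its labeled members are positively selected, so the estimated success rate comes out higher for the disfavored group; a rule trained on these labels then favors that group, which is the bias reversal described above.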

Cite as

Ashesh Rambachan and Jonathan Roth. Bias In, Bias Out? Evaluating the Folk Wisdom. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{rambachan_et_al:LIPIcs.FORC.2020.6,
  author =	{Rambachan, Ashesh and Roth, Jonathan},
  title =	{{Bias In, Bias Out? Evaluating the Folk Wisdom}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.6},
  URN =		{urn:nbn:de:0030-drops-120225},
  doi =		{10.4230/LIPIcs.FORC.2020.6},
  annote =	{Keywords: fairness, selective labels, discrimination, training data}
}
Document
Inherent Trade-Offs in the Fair Determination of Risk Scores

Authors: Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
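For context, the three conditions can be stated as follows (our notation, for a score R \in [0, 1], label Y, and groups a and b; the paper works with score bins):

% (A) Scores mean what they say within every group.
\text{Calibration within groups:}\quad \Pr[Y = 1 \mid R = r, A = a] = r \quad \text{for all } r, a
% (B) Truly positive members of each group receive the same average score.
\text{Balance for the positive class:}\quad \mathbb{E}[R \mid Y = 1, A = a] = \mathbb{E}[R \mid Y = 1, A = b]
% (C) Truly negative members of each group receive the same average score.
\text{Balance for the negative class:}\quad \mathbb{E}[R \mid Y = 0, A = a] = \mathbb{E}[R \mid Y = 0, A = b]

The "highly constrained special cases" are perfect prediction (R = Y almost surely) and equal base rates across groups (\Pr[Y = 1 \mid A = a] = \Pr[Y = 1 \mid A = b]).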

Cite as

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2017.43,
  author =	{Kleinberg, Jon and Mullainathan, Sendhil and Raghavan, Manish},
  title =	{{Inherent Trade-Offs in the Fair Determination of Risk Scores}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{43:1--43:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.43},
  URN =		{urn:nbn:de:0030-drops-81560},
  doi =		{10.4230/LIPIcs.ITCS.2017.43},
  annote =	{Keywords: algorithmic fairness, risk tools, calibration}
}
  • Refine by Type
  • 3 Document/PDF
  • 1 Document/HTML

  • Refine by Publication Year
  • 1 2025
  • 1 2020
  • 1 2017

  • Refine by Author
  • 1 Benger, Etam
  • 1 Kleinberg, Jon
  • 1 Ligett, Katrina
  • 1 Mullainathan, Sendhil
  • 1 Raghavan, Manish
  • …

  • Refine by Series/Journal
  • 3 LIPIcs

  • Refine by Classification
  • 1 Applied computing → Economics
  • 1 Computing methodologies → Machine learning
  • 1 Mathematics of computing → Information theory
  • 1 Social and professional topics → Computing / technology policy
  • 1 Theory of computation → Machine learning theory

  • Refine by Keyword
  • 1 Algorithmic fairness
  • 1 algorithmic fairness
  • 1 calibration
  • 1 discrimination
  • 1 fairness
  • …
