4 Search Results for "Raghavan, Manish"


Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?

Authors: Avrim Blum and Kevin Stangl

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data. We posit several ways in which training data may be biased, including a noisier or negatively biased labeling process for members of a disadvantaged group, a decreased prevalence of positive or negative examples from the disadvantaged group, or both. Given such biased training data, Empirical Risk Minimization (ERM) may produce a classifier that is not only biased but also has suboptimal accuracy on the true data distribution. We examine the ability of fairness-constrained ERM to correct this problem. In particular, we find that the Equal Opportunity fairness constraint [Hardt et al., 2016] combined with ERM will provably recover the Bayes optimal classifier under a range of bias models. We also consider other recovery methods, including re-weighting the training data, Equalized Odds, Demographic Parity, and Calibration. These theoretical results provide additional motivation for considering fairness interventions even if an actor cares primarily about accuracy.
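The labeling-bias model and the recovery claim can be illustrated with a small self-contained simulation (a sketch with hypothetical parameters, not the paper's construction or proof): positive examples in disadvantaged group B are relabeled negative with probability nu, plain per-group threshold ERM on the biased sample then learns to reject nearly all of group B, while constraining ERM to (approximately) equal true-positive rates on the same biased labels pushes it back toward the Bayes-optimal threshold.

```python
import random

random.seed(0)
GRID = [i / 50 for i in range(51)]  # candidate thresholds

def sample(n):
    data = []
    for _ in range(n):
        g = random.choice('AB')
        x = random.random()
        data.append((g, x, int(x > 0.5)))  # Bayes-optimal rule: 1[x > 0.5]
    return data

def bias_labels(data, nu=0.7):
    # One bias model from the abstract: positive examples in the
    # disadvantaged group B are relabeled negative with probability nu.
    return [(g, x, 0) if g == 'B' and y == 1 and random.random() < nu
            else (g, x, y) for g, x, y in data]

def stats(data, group, t):
    # (empirical errors, true-positive rate) of threshold t on one group
    pts = [(x, y) for g, x, y in data if g == group]
    err = sum((x > t) != y for x, y in pts)
    pos = [x for x, y in pts if y == 1]
    tpr = sum(x > t for x in pos) / max(len(pos), 1)
    return err, tpr

def erm(data, equal_opportunity=False, eps=0.05):
    # Per-group threshold ERM on `data`; optionally constrained to
    # (near-)equal TPR across groups, measured on the same labels.
    stats_a = {t: stats(data, 'A', t) for t in GRID}
    stats_b = {t: stats(data, 'B', t) for t in GRID}
    best = None
    for ta, (ea, tpra) in stats_a.items():
        for tb, (eb, tprb) in stats_b.items():
            if equal_opportunity and abs(tpra - tprb) > eps:
                continue
            if best is None or ea + eb < best[0]:
                best = (ea + eb, ta, tb)
    return best[1], best[2]

def accuracy(data, ta, tb):
    thr = {'A': ta, 'B': tb}
    return sum((x > thr[g]) == y for g, x, y in data) / len(data)

clean = sample(4000)
biased = bias_labels(clean)
acc_erm = accuracy(clean, *erm(biased))                          # plain ERM
acc_eo = accuracy(clean, *erm(biased, equal_opportunity=True))   # EO-constrained
print(f"plain ERM: {acc_erm:.2f}  EO-constrained ERM: {acc_eo:.2f}")
```

On the true distribution the Equal Opportunity-constrained classifier ends up markedly more accurate than unconstrained ERM, matching the abstract's claim in spirit for this particular bias model.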

Cite as

Avrim Blum and Kevin Stangl. Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blum_et_al:LIPIcs.FORC.2020.3,
  author =	{Blum, Avrim and Stangl, Kevin},
  title =	{{Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.3},
  URN =		{urn:nbn:de:0030-drops-120192},
  doi =		{10.4230/LIPIcs.FORC.2020.3},
  annote =	{Keywords: fairness in machine learning, equal opportunity, bias, machine learning}
}
Advancing Subgroup Fairness via Sleeping Experts

Authors: Avrim Blum and Thodoris Lykouris

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We study methods for improving fairness to subgroups in settings with overlapping populations and sequential predictions. Classical notions of fairness focus on the balance of some property across different populations. However, in many applications the goal of the different groups is not to be predicted equally but rather to be predicted well. We demonstrate that the task of satisfying this guarantee for multiple overlapping groups is not straightforward, and we show that even for the simple objective of the unweighted average of the false-negative and false-positive rates, satisfying it for overlapping populations can be statistically impossible even when we are provided predictors that perform well separately on each subgroup. On the positive side, we show that when individuals are equally important to the different groups they belong to, this goal is achievable; to do so, we draw a connection to the sleeping-experts literature in online learning. Motivated by the one-sided feedback in natural settings of interest, we extend our results to such a feedback model. We also provide a game-theoretic interpretation of our results, examining the incentives of participants to join the system and to provide the system with full information about predictors they may possess. We end with several interesting open problems concerning the strength of guarantees that can be achieved in a computationally efficient manner.
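The connection to sleeping experts can be sketched with a toy online-learning loop (hypothetical stream and parameters; not the paper's algorithm or its guarantees): each expert is "awake" only on rounds involving its subgroup, predictions aggregate only the awake experts, and only awake experts are reweighted after the outcome is revealed.

```python
import math
import random

random.seed(1)

class SleepingExperts:
    """Multiplicative-weights over experts that are only 'awake' (and only
    held accountable) on rounds involving their subgroup."""
    def __init__(self, n, eta=0.5):
        self.w = [1.0] * n
        self.eta = eta

    def predict(self, awake, advice):
        # weighted vote among the awake experts only
        total = sum(self.w[i] for i in awake)
        score = sum(self.w[i] * advice[i] for i in awake) / total
        return int(score >= 0.5)

    def update(self, awake, advice, outcome):
        # penalize only awake experts that were wrong
        for i in awake:
            if advice[i] != outcome:
                self.w[i] *= math.exp(-self.eta)

# Two overlapping groups; expert 0 is accurate on group-A rounds,
# expert 1 on group-B rounds (a made-up stream for illustration).
algo = SleepingExperts(2)
errs = {'A': [0, 0], 'B': [0, 0]}   # [mistakes, rounds] per group
for _ in range(3000):
    groups = random.choice([('A',), ('B',), ('A', 'B')])
    y = random.randint(0, 1)
    advice = [
        y if ('A' in groups and random.random() < 0.9) else random.randint(0, 1),
        y if ('B' in groups and random.random() < 0.9) else random.randint(0, 1),
    ]
    awake = [i for i, g in enumerate('AB') if g in groups]
    pred = algo.predict(awake, advice)
    algo.update(awake, advice, y)
    for g in groups:
        errs[g][0] += int(pred != y)
        errs[g][1] += 1

rates = {g: m / t for g, (m, t) in errs.items()}
print(rates)
```

Even though the two populations overlap, the combined predictor achieves a low error rate on each group's rounds simultaneously, because each group's specialist only competes on the rounds where it is awake.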

Cite as

Avrim Blum and Thodoris Lykouris. Advancing Subgroup Fairness via Sleeping Experts. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 55:1-55:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blum_et_al:LIPIcs.ITCS.2020.55,
  author =	{Blum, Avrim and Lykouris, Thodoris},
  title =	{{Advancing Subgroup Fairness via Sleeping Experts}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{55:1--55:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.55},
  URN =		{urn:nbn:de:0030-drops-117402},
  doi =		{10.4230/LIPIcs.ITCS.2020.55},
  annote =	{Keywords: Online learning, Fairness, Game theory}
}
Selection Problems in the Presence of Implicit Bias

Authors: Jon Kleinberg and Manish Raghavan

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Over the past two decades, the notion of implicit bias has come to serve as an important component in our understanding of bias and discrimination in activities such as hiring, promotion, and school admissions. Research on implicit bias posits that when people evaluate others - for example, in a hiring context - their unconscious biases about membership in particular demographic groups can have an effect on their decision-making, even when they have no deliberate intention to discriminate against members of these groups. A growing body of experimental work has demonstrated the effect that implicit bias can have in producing adverse outcomes. Here we propose a theoretical model for studying the effects of implicit bias on selection decisions, and a way of analyzing possible procedural remedies for implicit bias within this model. A canonical situation represented by our model is a hiring setting, in which recruiters are trying to evaluate the future potential of job applicants, but their estimates of potential are skewed by an unconscious bias against members of one group. In this model, we show that measures such as the Rooney Rule, a requirement that at least one member of an underrepresented group be selected, can not only improve the representation of the affected group, but also lead to higher payoffs in absolute terms for the organization performing the recruiting. However, identifying the conditions under which such measures can lead to improved payoffs involves subtle trade-offs between the extent of the bias and the underlying distribution of applicant characteristics, leading to novel theoretical questions about order statistics in the presence of probabilistic side information.
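A toy Monte Carlo version of this setting (hypothetical parameters, not the paper's exact model: payoff is taken as the sum of the finalists' true potentials, potentials are heavy-tailed, and implicit bias is modeled as the recruiter dividing group Y's estimated potential by a factor beta):

```python
import random

random.seed(2)

def potential(alpha=3.0):
    # heavy-tailed true potential, support [1, inf)
    return random.paretovariate(alpha)

def trial(n_per_group=10, k=4, beta=10.0):
    # true potentials; the recruiter sees group Y's divided by beta
    cands = ([('X', potential()) for _ in range(n_per_group)] +
             [('Y', potential()) for _ in range(n_per_group)])
    est = lambda g, p: p / beta if g == 'Y' else p
    ranked = sorted(cands, key=lambda c: est(*c), reverse=True)
    plain = ranked[:k]                  # unconstrained: top k by biased estimate
    fair = list(plain)
    if all(g == 'X' for g, _ in fair):
        # Rooney Rule: swap the lowest-estimated finalist for the
        # best group-Y candidate (by estimate; ranking within Y is preserved)
        best_y = max((c for c in cands if c[0] == 'Y'), key=lambda c: c[1])
        fair[-1] = best_y
    payoff = lambda sel: sum(p for _, p in sel)
    return payoff(plain), payoff(fair)

T = 5000
results = [trial() for _ in range(T)]
avg_plain = sum(r[0] for r in results) / T
avg_fair = sum(r[1] for r in results) / T
print(f"unconstrained: {avg_plain:.2f}  with Rooney Rule: {avg_fair:.2f}")
```

With a heavy-tailed potential distribution and a large bias factor, the biased ranking shuts group Y out of the final k almost entirely, and forcing one group-Y finalist raises the average payoff, illustrating how such a rule can benefit the recruiting organization in absolute terms.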

Cite as

Jon Kleinberg and Manish Raghavan. Selection Problems in the Presence of Implicit Bias. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 33:1-33:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2018.33,
  author =	{Kleinberg, Jon and Raghavan, Manish},
  title =	{{Selection Problems in the Presence of Implicit Bias}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{33:1--33:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.33},
  URN =		{urn:nbn:de:0030-drops-83234},
  doi =		{10.4230/LIPIcs.ITCS.2018.33},
  annote =	{Keywords: algorithmic fairness, power laws, order statistics, implicit bias, Rooney Rule}
}
Inherent Trade-Offs in the Fair Determination of Risk Scores

Authors: Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
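The three conditions are calibration within groups and balance, across groups, of the average score assigned to true negatives and to true positives. A tiny numerical instance (hypothetical base rates; a single score bin per group, which is trivially calibrated) shows how unequal base rates force the balance conditions to fail:

```python
def balance_stats(scored):
    # generalized false positive / false negative rates of a score:
    # mean score among true negatives, mean (1 - score) among true positives
    neg = [s for s, y in scored if y == 0]
    pos = [s for s, y in scored if y == 1]
    return sum(neg) / len(neg), sum(1 - s for s in pos) / len(pos)

def calibrated_group(base_rate, n=1000):
    # a trivially calibrated predictor: everyone receives the group's base rate
    k = round(n * base_rate)
    return [(base_rate, 1)] * k + [(base_rate, 0)] * (n - k)

fpr1, fnr1 = balance_stats(calibrated_group(0.5))  # group 1: base rate 50%
fpr2, fnr2 = balance_stats(calibrated_group(0.2))  # group 2: base rate 20%
print(fpr1, fnr1, fpr2, fnr2)
```

Both groups are perfectly calibrated, yet the generalized false positive rates (0.5 vs. 0.2) and false negative rates (0.5 vs. 0.8) differ, consistent with the theorem that all three conditions can hold together only when base rates are equal or prediction is perfect.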

Cite as

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2017.43,
  author =	{Kleinberg, Jon and Mullainathan, Sendhil and Raghavan, Manish},
  title =	{{Inherent Trade-Offs in the Fair Determination of Risk Scores}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{43:1--43:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.43},
  URN =		{urn:nbn:de:0030-drops-81560},
  doi =		{10.4230/LIPIcs.ITCS.2017.43},
  annote =	{Keywords: algorithmic fairness, risk tools, calibration}
}
  • Refine by Author
  • 2 Blum, Avrim
  • 2 Kleinberg, Jon
  • 2 Raghavan, Manish
  • 1 Lykouris, Thodoris
  • 1 Mullainathan, Sendhil

  • Refine by Classification
  • 1 Theory of computation → Machine learning theory
  • 1 Theory of computation → Online learning algorithms

  • Refine by Keyword
  • 2 algorithmic fairness
  • 1 Fairness
  • 1 Game theory
  • 1 Online learning
  • 1 Rooney Rule

  • Refine by Type
  • 4 document

  • Refine by Publication Year
  • 2 2020
  • 1 2017
  • 1 2018
