Efficient Candidate Screening Under Multiple Tests and Implications for Fairness

Authors: Lee Cohen, Zachary C. Lipton, Yishay Mansour




File

LIPIcs.FORC.2020.1.pdf
  • Filesize: 0.59 MB
  • 20 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.FORC.2020.1

Author Details

Lee Cohen
  • Tel Aviv University, Israel
Zachary C. Lipton
  • Carnegie Mellon University, Pittsburgh, PA, USA
  • Amazon AI, Palo Alto, CA, USA
Yishay Mansour
  • Tel Aviv University, Israel
  • Google Research, Tel Aviv, Israel

Cite As

Lee Cohen, Zachary C. Lipton, and Yishay Mansour. Efficient Candidate Screening Under Multiple Tests and Implications for Fairness. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)
https://doi.org/10.4230/LIPIcs.FORC.2020.1

Abstract

When recruiting job candidates, employers rarely observe their underlying skill level directly. Instead, they must administer a series of interviews and/or collate other noisy signals in order to estimate the worker’s skill. Traditional economics papers address screening models where employers assess worker skill via a single noisy signal. In this paper, we extend this theoretical analysis to a multi-test setting, considering both Bernoulli and Gaussian models. We analyze the optimal employer policy both when the employer sets a fixed number of tests per candidate and when the employer can set a dynamic policy, assigning further tests adaptively based on the results of previous tests. To start, we characterize the optimal policy when candidates constitute a single group, demonstrating some interesting trade-offs. Subsequently, we address the multi-group setting, demonstrating that when noise levels vary across groups, a fundamental impossibility emerges: we cannot administer the same number of tests, subject candidates to the same decision rule, and yet realize the same outcomes in both groups. We show that by subjecting members of noisier groups to more tests, we can equalize the confusion matrix entries across groups, seemingly eliminating any disparate impact concerning outcomes.
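
The mechanism behind the last claim is easy to see in a toy version of the Bernoulli model. The Python sketch below is not the paper's construction: the symmetric per-test noise rates, the 50% skill prior, and the majority-vote hiring rule are illustrative assumptions. It simulates two groups that differ only in test noise; under the same rule and the same number of tests, their confusion-matrix entries diverge, while assigning the noisier group more tests brings them back together.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def screen(n_tests, noise, prior=0.5, n_candidates=200_000):
        """Simulate majority-vote screening in a symmetric Bernoulli test model.

        Each candidate has a latent binary skill drawn with probability `prior`;
        each of `n_tests` i.i.d. tests reports the true skill, flipped with
        probability `noise`. A candidate is hired when a strict majority of
        tests come back positive. Returns the empirical (TPR, FPR).
        """
        skill = rng.random(n_candidates) < prior
        flips = rng.random((n_candidates, n_tests)) < noise
        results = skill[:, None] ^ flips            # noisy test outcomes
        hired = results.sum(axis=1) > n_tests / 2   # majority-vote decision rule
        return hired[skill].mean(), hired[~skill].mean()

    # Same decision rule, same number of tests, different noise levels:
    # the noisier group gets a lower TPR and a higher FPR.
    print("group A, noise 0.20,  5 tests:", screen(5, 0.20))
    print("group B, noise 0.30,  5 tests:", screen(5, 0.30))

    # Subjecting the noisier group to more tests approximately equalizes
    # the confusion-matrix entries across the two groups.
    print("group B, noise 0.30, 13 tests:", screen(13, 0.30))

Under these assumptions, five tests classify group A correctly roughly 94% of the time but group B only roughly 84%; raising group B to 13 tests closes most of that gap, illustrating why equalizing confusion matrices requires administering more tests to the noisier group.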

Subject Classification

ACM Subject Classification
  • Social and professional topics → Computing / technology policy
  • Mathematics of computing → Probabilistic inference problems
  • Computing methodologies → Unsupervised learning
Keywords
  • algorithmic fairness
  • random walk
  • inference

