Algorithms in the Presence of Biased Inputs (Invited Talk)

Nisheeth K. Vishnoi



File

  • LIPIcs.FSTTCS.2023.5.pdf (373 kB, 2 pages)

Author Details

Nisheeth K. Vishnoi
  • Yale University, New Haven, CT, USA

Cite As

Nisheeth K. Vishnoi. Algorithms in the Presence of Biased Inputs (Invited Talk). In 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 284, pp. 5:1-5:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.FSTTCS.2023.5

Abstract

Algorithms for optimization problems such as selection, ranking, and classification typically assume that their inputs are what they are promised to be. In several real-world applications of these problems, however, the inputs may contain systematic biases along socially salient attributes such as race, gender, or political opinion. Such biases can not only lead current algorithms to output solutions that are sub-optimal with respect to the true inputs, but may also adversely affect opportunities for individuals in disadvantaged socially salient groups. This talk considers the question of using optimization to solve the aforementioned problems in the presence of biased inputs. It starts with models of bias in inputs and then discusses alternative ways to design algorithms for the underlying problems that mitigate the effects of biases by taking knowledge about the biases into account. This talk is based on several joint works with a number of co-authors.
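To give a flavor of the kind of bias model and intervention discussed in the talk, consider a multiplicative implicit-bias model for subset selection, in the spirit of the works listed in the references: the observed utilities of candidates from a disadvantaged group are systematically scaled down by a factor β < 1, and a group-wise representational constraint on the selection can recover latent utility. The following Python sketch is purely illustrative; the population sizes, the bias factor `beta`, and the proportional constraint are assumptions for exposition, not the talk's exact formulation.

```python
import random

random.seed(0)

n, k, beta = 1000, 50, 0.5  # candidates, slots, bias factor (< 1)

# Latent (true) utilities are uniform; half the candidates belong to
# group "B", whose observed scores are scaled down by beta.
candidates = []
for i in range(n):
    group = "A" if i % 2 == 0 else "B"
    latent = random.random()
    observed = latent if group == "A" else beta * latent
    candidates.append((group, latent, observed))

def latent_utility(selection):
    """Total true utility of a selected subset."""
    return sum(latent for _, latent, _ in selection)

# Unconstrained selection: top-k by the (biased) observed scores.
unconstrained = sorted(candidates, key=lambda c: c[2], reverse=True)[:k]

# Intervention: require proportional representation (k/2 slots per
# group), picking the best observed candidates within each group.
by_group = {"A": [], "B": []}
for c in candidates:
    by_group[c[0]].append(c)
constrained = []
for g in ("A", "B"):
    constrained += sorted(by_group[g], key=lambda c: c[2], reverse=True)[: k // 2]

# Under this bias model the constrained selection recovers more latent
# utility: qualified group-B candidates are no longer crowded out by
# their scaled-down observed scores.
print(latent_utility(unconstrained), latent_utility(constrained))
```

In this toy model the unconstrained rule selects almost exclusively from group A (since group B's observed scores are capped at β), while the constrained rule attains higher total latent utility; the cited works analyze when and why such constraints are effective in much more general settings.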

Subject Classification

ACM Subject Classification
  • Theory of computation → Theory and algorithms for application domains
  • Theory of computation → Design and analysis of algorithms
  • Human-centered computing
Keywords
  • Algorithmic Bias

References

  1. Niclas Boehmer, L. Elisa Celis, Lingxiao Huang, Anay Mehrotra, and Nisheeth K. Vishnoi. Subset selection based on multiple rankings in the presence of bias: Effectiveness of fairness constraints for multiwinner voting score functions. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 2641-2688. PMLR, 2023. URL: https://proceedings.mlr.press/v202/boehmer23a.html.
  2. L. Elisa Celis, Chris Hays, Anay Mehrotra, and Nisheeth K. Vishnoi. The effect of the Rooney Rule on implicit bias in the long term. In Madeleine Clare Elish, William Isaac, and Richard S. Zemel, editors, FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021, pages 678-689. ACM, 2021. URL: https://doi.org/10.1145/3442188.3445930.
  3. L. Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K. Vishnoi. Fair classification with noisy protected attributes: A framework with provable guarantees. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 1349-1361. PMLR, 2021. URL: http://proceedings.mlr.press/v139/celis21a.html.
  4. L. Elisa Celis, Amit Kumar, Anay Mehrotra, and Nisheeth K. Vishnoi. Bias in evaluation processes: An optimization-based approach. In NeurIPS, 2023.
  5. L. Elisa Celis, Anay Mehrotra, and Nisheeth K. Vishnoi. Interventions for ranking in the presence of implicit bias. In Mireille Hildebrandt, Carlos Castillo, L. Elisa Celis, Salvatore Ruggieri, Linnet Taylor, and Gabriela Zanfir-Fortuna, editors, FAT* '20: Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, January 27-30, 2020, pages 369-380. ACM, 2020. URL: https://doi.org/10.1145/3351095.3372858.
  6. L. Elisa Celis, Anay Mehrotra, and Nisheeth K. Vishnoi. Fair classification with adversarial perturbations. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 8158-8171, 2021. URL: https://proceedings.neurips.cc/paper/2021/hash/44e207aecc63505eb828d442de03f2e9-Abstract.html.
  7. Lingxiao Huang and Nisheeth K. Vishnoi. Stable and fair classification. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2879-2890. PMLR, 2019. URL: http://proceedings.mlr.press/v97/huang19e.html.
  8. Anay Mehrotra, Bary S. R. Pradelski, and Nisheeth K. Vishnoi. Selection in the presence of implicit bias: The advantage of intersectional constraints. In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21-24, 2022, pages 599-609. ACM, 2022. URL: https://doi.org/10.1145/3531146.3533124.
  9. Anay Mehrotra and Nisheeth K. Vishnoi. Fair ranking with noisy protected attributes. In NeurIPS, 2022. URL: http://papers.nips.cc/paper_files/paper/2022/hash/cdd0640218a27e9e2c0e52e324e25db0-Abstract-Conference.html.
  10. Anay Mehrotra and Nisheeth K. Vishnoi. Maximizing submodular functions for recommendation in the presence of biases. In Ying Ding, Jie Tang, Juan F. Sequeda, Lora Aroyo, Carlos Castillo, and Geert-Jan Houben, editors, Proceedings of the ACM Web Conference 2023, WWW 2023, Austin, TX, USA, 30 April 2023 - 4 May 2023, pages 3625-3636. ACM, 2023. URL: https://doi.org/10.1145/3543507.3583195.