Obstructing Classification via Projection

Authors

Pantea Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, Kevin Verbeek



Author Details

Pantea Haghighatkhah
  • TU Eindhoven, The Netherlands
Wouter Meulemans
  • TU Eindhoven, The Netherlands
Bettina Speckmann
  • TU Eindhoven, The Netherlands
Jérôme Urhausen
  • Utrecht University, The Netherlands
Kevin Verbeek
  • TU Eindhoven, The Netherlands

Acknowledgements

Research on the topic of this paper was initiated at the 5th Workshop on Applied Geometric Algorithms (AGA 2020) in Langbroek, NL. The authors thank Jordi Vermeulen for initial discussions on the topic of this paper.

Cite As

Pantea Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, and Kevin Verbeek. Obstructing Classification via Projection. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 56:1-56:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.MFCS.2021.56

Abstract

Machine learning and data mining techniques are effective tools to classify large amounts of data. But they tend to preserve any inherent bias in the data, for example, with regard to gender or race. Removing such bias from data or the learned representations is quite challenging. In this paper we study a geometric problem which models a possible approach for bias removal. Our input is a set of points P in Euclidean space ℝ^d and each point is labeled with k binary-valued properties. A priori we assume that it is "easy" to classify the data according to each property. Our goal is to obstruct the classification according to one property by a suitable projection to a lower-dimensional Euclidean space ℝ^m (m < d), while classification according to all other properties remains easy. What it means for classification to be easy depends on the classification model used. We first consider classification by linear separability as employed by support vector machines. We use Kirchberger’s Theorem to show that, under certain conditions, a simple projection to ℝ^{d-1} suffices to eliminate the linear separability of one of the properties while maintaining the linear separability of the other properties. We also study the problem of maximizing the linear "inseparability" of the chosen property. Second, we consider more complex forms of separability and prove a connection between the number of projections required to obstruct classification and the Helly-type properties of such separabilities.
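
To make the setting concrete, here is a minimal, illustrative sketch (not taken from the paper): toy points in ℝ² carry two binary properties A and B, and dropping the coordinate that classifies A (a projection to ℝ¹ = ℝ^{d-1}) destroys the linear separability of A while B remains linearly separable. The synthetic data, the helper functions cluster and separable, and the use of scikit-learn's linear SVM with a large penalty parameter as a proxy for an exact separability test are all assumptions made for this example.

```python
# Illustrative sketch only; not the paper's algorithm.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cluster(cx, cy, n=20):
    """n points scattered tightly around (cx, cy)."""
    return rng.normal(loc=[cx, cy], scale=0.1, size=(n, 2))

# Four clusters in R^2: property A is determined by the x-coordinate,
# property B by the y-coordinate, so both are "easy" (linearly separable).
P = np.vstack([cluster(0.5, 0.5), cluster(2.5, 0.5),
               cluster(0.5, 2.5), cluster(2.5, 2.5)])
A = np.array([0] * 20 + [1] * 20 + [0] * 20 + [1] * 20)
B = np.array([0] * 20 + [0] * 20 + [1] * 20 + [1] * 20)

def separable(X, y):
    """Heuristic separability test: a (near) hard-margin linear SVM
    classifies the training data perfectly when the two classes are
    linearly separable (up to numerical tolerance)."""
    clf = SVC(kernel="linear", C=1e6).fit(X, y)
    return clf.score(X, y) == 1.0

print("In R^2: A separable:", separable(P, A), "| B separable:", separable(P, B))

# Project to R^1 = R^{d-1} by dropping the x-coordinate (the direction
# that classifies A).  A's classes now overlap, while B stays separable.
Q = P[:, [1]]
print("In R^1: A separable:", separable(Q, A), "| B separable:", separable(Q, B))
```

Running the sketch should report both properties as separable in ℝ² and only B as separable after the projection, which is the kind of obstruction the paper studies for general d and for more complex notions of separability.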

Subject Classification

ACM Subject Classification
  • Theory of computation → Computational geometry
  • Theory of computation → Models of learning
Keywords
  • Projection
  • classification
  • models of learning

References

  1. Mohsen Abbasi, Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. Fairness in representation: quantifying stereotyping as a representational harm. In Proc. SIAM International Conference on Data Mining, pages 801-809, 2019. URL: https://doi.org/10.1137/1.9781611975673.90.
  2. Alexander Amini, Ava P. Soleimany, Wilko Schwarting, Sangeeta N. Bhatia, and Daniela Rus. Uncovering and mitigating algorithmic bias through learned latent structure. In Proc. AAAI/ACM Conference on AI, Ethics, and Society, pages 289-295, 2019. URL: https://doi.org/10.1145/3306618.3314243.
  3. Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. A convex framework for fair regression. arXiv:1706.02409, 2017.
  4. Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, 2006.
  5. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, pages 4349-4357, 2016.
  6. Shikha Bordia and Samuel R. Bowman. Identifying and reducing gender bias in word-level language models. In Proc. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 7-15, 2019. URL: https://doi.org/10.18653/v1/n19-3002.
  7. Marc E. Brunet, Colleen A. Houlihan, Ashton Anderson, and Richard S. Zemel. Understanding the origins of bias in word embeddings. In Proc. 36th International Conference on Machine Learning, volume 97, pages 803-811, 2019.
  8. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, 2017. URL: https://doi.org/10.1126/science.aal4230.
  9. Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Srikumar. On measuring and mitigating biased inferences of word embeddings. In Proc. AAAI Conference on Artificial Intelligence, pages 7659-7666, 2020.
  10. Sunipa Dev and Jeff Phillips. Attenuating bias in word vectors. In Proc. Machine Learning Research, pages 879-887, 2019.
  11. Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv, 2016. URL: http://arxiv.org/abs/1511.05897.
  12. Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proc. 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259-268, 2015.
  13. Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, pages 3315-3323, 2016.
  14. Yuzi He, Keith Burghardt, and Kristina Lerman. A geometric solution to fair representations. In Proc. AAAI/ACM Conference on AI, Ethics, and Society, pages 279-285, 2020. URL: https://doi.org/10.1145/3375627.3375864.
  15. Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33:1-33, 2011. URL: https://doi.org/10.1007/s10115-011-0463-8.
  16. Paul Kirchberger. Über Tchebychefsche Annäherungsmethoden. Mathematische Annalen, 57:509-540, 1903. URL: https://doi.org/10.1007/BF01445182.
  17. Ted Lewis. Two counterexamples concerning transversals for convex subsets of the plane. Geometriae Dedicata, 9:461-465, 1980. URL: https://doi.org/10.1007/BF00181561.
  18. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In Proc. 35th International Conference on Machine Learning, pages 3384-3393, 2018.
  19. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. arXiv, 2019. URL: http://arxiv.org/abs/1908.09635.
  20. Frédéric Meunier, Wolfgang Mulzer, Pauline Sarrabezolles, and Yannik Stein. The rainbow at the end of the line - A PPAD formulation of the colorful Carathéodory theorem with applications. In Proc. 28th ACM-SIAM Symposium on Discrete Algorithms, pages 1342-1351, 2017. URL: https://doi.org/10.1137/1.9781611974782.87.
  21. Naiyang Deng, Yingjie Tian, and Chunhua Zhang. Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions. CRC Press, 2012.
  22. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In Proc. 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, 2020.
  23. Jennifer L. Skeem and Christopher T. Lowenkamp. Risk, race, and recidivism: Predictive bias and disparate impact. Criminology, 54(4):680-712, 2016. URL: https://doi.org/10.1111/1745-9125.12123.
  24. Rephael Wenger. Helly-type theorems and geometric transversals. In Jacob E. Goodman and Joseph O'Rourke, editors, Handbook of Discrete and Computational Geometry, second edition, pages 73-96. Chapman and Hall/CRC, 2004. URL: https://doi.org/10.1201/9781420035315.ch4.
  25. Muhammad Bilal Zafar, Isabel Valera, Manuel G. Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In Proc. 20th International Conference on Artificial Intelligence and Statistics, volume 54, pages 962-970, 2017.
  26. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In Proc. 30th International Conference on Machine Learning, pages 325-333, 2013.
  27. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proc. AAAI/ACM Conference on AI, Ethics, and Society, pages 335-340, 2018. URL: https://doi.org/10.1145/3278721.3278779.
  28. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. Learning gender-neutral word embeddings. In Proc. Conference on Empirical Methods in Natural Language Processing, pages 4847-4853, 2018.