Search Results

Documents authored by Yona, Gal


Document
Decision-Making Under Miscalibration

Authors: Guy N. Rothblum and Gal Yona

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
How should we use ML-based predictions (e.g., risk of heart attack) to inform downstream binary classification decisions (e.g., undergoing a medical procedure)? When the risk estimates are perfectly calibrated, the answer is well understood: a classification problem’s cost structure induces an optimal treatment threshold j^⋆. In practice, however, predictors are often miscalibrated, and this can lead to harmful decisions. This raises a fundamental question: how should one use potentially miscalibrated predictions to inform binary decisions? In this work, we study this question from the perspective of algorithmic fairness. Specifically, we focus on the impact of decisions on protected demographic subgroups, when we are only given a bound on the predictor’s anticipated degree of subgroup-miscalibration. We formalize a natural (distribution-free) solution concept for translating predictions into decisions: given anticipated miscalibration of α, we propose using the threshold j that minimizes the worst-case regret over all α-miscalibrated predictors, where the regret is the difference in clinical utility between using the threshold in question and using the optimal threshold in hindsight. We provide closed-form expressions for j when miscalibration is measured using both expected and maximum calibration error, which reveal that it indeed differs from j^⋆ (the optimal threshold under perfect calibration).
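The minimax-regret solution concept described in the abstract can be made concrete with a small numeric sketch. The code below is illustrative only and is not the paper's closed-form solution: it assumes predicted risks on a uniform grid, a simple utility of q - c for treating an individual with true risk q and 0 for not treating (so the perfectly calibrated threshold is just c), a per-score miscalibration bound |true risk - predicted risk| <= alpha in the spirit of maximum calibration error, and regret measured against omniscient per-individual decisions rather than the optimal threshold in hindsight; all parameter values are hypothetical.

```python
# A minimal numeric sketch of the minimax-regret solution concept from the abstract.
# NOT the paper's closed-form solution; the model below is a simplified, hypothetical one.

import numpy as np

c = 0.3                                   # treatment cost; under perfect calibration the optimal threshold is c
alpha = 0.1                               # anticipated miscalibration bound
scores = np.linspace(0.0, 1.0, 101)       # predicted risk values (uniform marginal assumed)
candidates = np.linspace(0.0, 1.0, 101)   # candidate decision thresholds

def worst_case_regret(j):
    """Worst-case average regret of thresholding at j over all true-risk functions
    within alpha of the predictions (per-score bound). Because regret is piecewise
    linear in each true risk here, the adversary simply pushes treated scores down
    by alpha and untreated scores up by alpha."""
    lo = np.clip(scores - alpha, 0.0, 1.0)
    hi = np.clip(scores + alpha, 0.0, 1.0)
    treated = scores >= j
    regret = np.where(treated,
                      np.maximum(0.0, c - lo),   # treated, but true risk may fall below c
                      np.maximum(0.0, hi - c))   # untreated, but true risk may exceed c
    return regret.mean()

j_minimax = min(candidates, key=worst_case_regret)
print(f"threshold under perfect calibration: {c:.2f}")
print(f"minimax-regret threshold for alpha={alpha}: {j_minimax:.2f}")
```

In this deliberately symmetric simplification the minimax-regret threshold coincides with the perfectly calibrated one; in the paper's richer setting, where regret is taken against the optimal threshold in hindsight and miscalibration is measured via expected or maximum calibration error, the closed-form analysis shows the two indeed differ.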

Cite as

Guy N. Rothblum and Gal Yona. Decision-Making Under Miscalibration. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 92:1-92:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rothblum_et_al:LIPIcs.ITCS.2023.92,
  author =	{Rothblum, Guy N. and Yona, Gal},
  title =	{{Decision-Making Under Miscalibration}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{92:1--92:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.92},
  URN =		{urn:nbn:de:0030-drops-175951},
  doi =		{10.4230/LIPIcs.ITCS.2023.92},
  annote =	{Keywords: risk prediction, calibration, algorithmic fairness, multi-group fairness}
}
Document
On Fairness and Stability in Two-Sided Matchings

Authors: Gili Karni, Guy N. Rothblum, and Gal Yona

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
There are growing concerns that algorithms, which increasingly make or influence important decisions pertaining to individuals, might produce outcomes that discriminate against protected groups. We study such fairness concerns in the context of a two-sided market, where there are two sets of agents, and each agent has preferences over the other set. The goal is to produce a matching between the sets. Throughout this work, we use the example of matching medical residents (whom we call "doctors") to hospitals. This setting has been the focus of a rich body of work. The seminal work of Gale and Shapley formulated a stability desideratum and showed that a stable matching always exists and can be found in polynomial time. With fairness concerns in mind, it is natural to ask: might a stable matching be discriminatory towards some of the doctors? How can we obtain a fair matching? The question is interesting both when hospital preferences might be discriminatory and when each hospital’s preferences are fair. We study this question through the lens of metric-based fairness notions (Dwork et al. [ITCS 2012] and Kim et al. [ITCS 2020]). We formulate appropriate definitions of fairness and stability in the presence of a similarity metric, and ask: does a fair and stable matching always exist? Can such a matching be found in polynomial time? Can classical Gale-Shapley algorithms find such a matching? Our contributions are as follows:
- Composition failures for classical algorithms. We show that composing the Gale-Shapley algorithm with fair hospital preferences can produce blatantly unfair outcomes.
- New algorithms for finding fair and stable matchings. Our main technical contributions are efficient new algorithms for finding fair and stable matchings when: (i) the hospitals' preferences are fair, and (ii) the fairness metric satisfies a strong "proto-metric" condition: the distance between every two doctors is either zero or one. In particular, these algorithms also show that, in this setting, fairness and stability are compatible.
- Barriers for finding fair and stable matchings in the general case. We show that if the hospital preferences can be unfair, or if the metric fails to satisfy the proto-metric condition, then no algorithm in a natural class can find a fair and stable matching. The natural class includes the classical Gale-Shapley algorithms and our new algorithms.
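For reference, here is a minimal sketch of the classical doctor-proposing deferred-acceptance (Gale-Shapley) algorithm that the abstract discusses; it is the baseline whose composition with fair hospital preferences can fail, not the paper's new fair-and-stable algorithms. The names and preference data are illustrative, and the sketch assumes one slot per hospital and that every hospital ranks every doctor.

```python
# Classical doctor-proposing deferred acceptance (Gale-Shapley), for reference only.

def gale_shapley(doctor_prefs, hospital_prefs):
    """doctor_prefs: dict doctor -> list of hospitals, most preferred first.
    hospital_prefs: dict hospital -> list of doctors, most preferred first.
    Returns a stable matching as a dict doctor -> hospital (one slot per hospital)."""
    rank = {h: {d: i for i, d in enumerate(prefs)} for h, prefs in hospital_prefs.items()}
    next_proposal = {d: 0 for d in doctor_prefs}   # index of the next hospital d will propose to
    match_of_hospital = {}                         # hospital -> doctor it currently holds
    free = list(doctor_prefs)                      # doctors without a tentative match

    while free:
        d = free.pop()
        if next_proposal[d] >= len(doctor_prefs[d]):
            continue                               # d has exhausted their list; stays unmatched
        h = doctor_prefs[d][next_proposal[d]]
        next_proposal[d] += 1
        current = match_of_hospital.get(h)
        if current is None:
            match_of_hospital[h] = d               # hospital tentatively accepts d
        elif rank[h][d] < rank[h][current]:
            match_of_hospital[h] = d               # hospital trades up and rejects its current match
            free.append(current)
        else:
            free.append(d)                         # proposal rejected; d will propose again later

    return {d: h for h, d in match_of_hospital.items()}

# Tiny illustrative instance:
doctors = {"d1": ["h1", "h2"], "d2": ["h1", "h2"]}
hospitals = {"h1": ["d2", "d1"], "h2": ["d1", "d2"]}
print(gale_shapley(doctors, hospitals))   # {'d2': 'h1', 'd1': 'h2'}
```

The result is a stable matching for these preferences; the abstract's composition-failure result shows that even when each hospital's preferences are individually fair, the matching produced by this classical procedure can still be blatantly unfair to some doctors.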

Cite as

Gili Karni, Guy N. Rothblum, and Gal Yona. On Fairness and Stability in Two-Sided Matchings. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 92:1-92:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{karni_et_al:LIPIcs.ITCS.2022.92,
  author =	{Karni, Gili and Rothblum, Guy N. and Yona, Gal},
  title =	{{On Fairness and Stability in Two-Sided Matchings}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{92:1--92:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.92},
  URN =		{urn:nbn:de:0030-drops-156880},
  doi =		{10.4230/LIPIcs.ITCS.2022.92},
  annote =	{Keywords: algorithmic fairness}
}
Document
Preference-Informed Fairness

Authors: Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In this work, we study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al. [ITCS 2012], which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals that IF aims to protect (e.g., a protected minority group). A natural alternative to IF is the classic fair-division notion of envy-freeness (EF): no individual should prefer another individual’s outcome over their own. Although EF allows for solutions where all individuals receive a highly preferred outcome, EF may also be overly restrictive for the decision-maker. For instance, if many individuals agree on the best outcome, then if any individual receives this outcome, they all must receive it, regardless of each individual’s underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints, but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF, for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted advertising setting introduced by Dwork and Ilvento [ITCS 2019].
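To make the two baseline notions concrete, here is a small sketch that checks individual fairness (IF) and envy-freeness (EF) on a toy instance with randomized outcomes. The metric values, utilities, and the choice of total-variation distance for the IF constraint are illustrative assumptions on my part; PIIF itself, which relaxes both notions, is not implemented here, and its precise definition is in the paper.

```python
# Toy checks of individual fairness (IF) and envy-freeness (EF) on randomized outcomes.
# All numbers below are hypothetical; PIIF (the paper's relaxation of both) is not shown.

import numpy as np

# outcomes[i] = probability distribution over the possible outcomes for individual i
outcomes = np.array([[0.8, 0.2],
                     [0.3, 0.7],
                     [0.7, 0.3]])
# task-specific similarity metric d(i, j) (hypothetical values)
d = np.array([[0.0, 0.6, 0.1],
              [0.6, 0.0, 0.5],
              [0.1, 0.5, 0.0]])
# utility[i][k] = value individual i places on outcome k (hypothetical preferences)
utility = np.array([[1.0, 0.0],
                    [0.2, 1.0],
                    [0.0, 1.0]])

def violates_if(outcomes, d):
    """IF: the total-variation distance between any two individuals' outcome
    distributions must not exceed their metric distance."""
    n = len(outcomes)
    for i in range(n):
        for j in range(n):
            tv = 0.5 * np.abs(outcomes[i] - outcomes[j]).sum()
            if tv > d[i, j] + 1e-9:
                return (i, j)
    return None

def violates_ef(outcomes, utility):
    """EF: no individual should prefer (in expectation) another's outcome to their own."""
    n = len(outcomes)
    for i in range(n):
        for j in range(n):
            if utility[i] @ outcomes[j] > utility[i] @ outcomes[i] + 1e-9:
                return (i, j)
    return None

print("IF violation:", violates_if(outcomes, d))        # None: the IF constraints hold
print("EF violation:", violates_ef(outcomes, utility))  # (2, 1): individual 2 envies 1
```

On this instance the outcomes satisfy IF, yet individual 2 envies individual 1's outcome: the metric ties individual 2's outcome to individual 0's, even though 2's preferences align with individual 1's. This mirrors the abstract's observation that IF alone can leave individuals with less-preferred outcomes.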

Cite as

Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. Preference-Informed Fairness. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kim_et_al:LIPIcs.ITCS.2020.16,
  author =	{Kim, Michael P. and Korolova, Aleksandra and Rothblum, Guy N. and Yona, Gal},
  title =	{{Preference-Informed Fairness}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.16},
  URN =		{urn:nbn:de:0030-drops-117010},
  doi =		{10.4230/LIPIcs.ITCS.2020.16},
  annote =	{Keywords: algorithmic fairness}
}