Search Results

Documents authored by Beyhaghi, Hedyeh


Document
Setting Fair Incentives to Maximize Improvement

Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
We consider the problem of helping agents improve by setting goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach (or do nothing if no target level is within reach). We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and we consider fairness objectives when the agents belong to different underlying populations. We prove algorithmic, learning, and structural results for each model. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement; agents who previously tried hard to reach a distant target now have a closer target to reach and hence improve less. This is especially challenging when considering multiple groups, because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, despite the non-monotonicity property and interference of the target levels, we show that a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we provide counterexamples to this result in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
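
A minimal sketch of the non-monotonicity described above, assuming the common improvement capacity model as stated in the abstract (each agent moves to the closest target level at or above their initial level that lies within the common capacity); the agent levels, capacity, and target sets below are hypothetical:

# Toy illustration (not from the paper): total improvement can drop when a
# new target level is added, in the common improvement capacity model.
# Each agent moves to the closest reachable target above their initial level
# (at distance at most the common capacity), or stays put if none is reachable.

def total_improvement(initial_levels, targets, capacity):
    welfare = 0.0
    for x in initial_levels:
        # targets the agent can reach (at or above x, within capacity)
        reachable = [t for t in targets if x <= t <= x + capacity]
        if reachable:
            welfare += min(reachable) - x  # agent settles for the closest target
    return welfare

agents = [0.0, 0.0, 3.0]      # hypothetical initial skill levels
capacity = 4.0                # common improvement capacity

print(total_improvement(agents, [4.0], capacity))        # 4 + 4 + 1 = 9
print(total_improvement(agents, [2.0, 4.0], capacity))   # 2 + 2 + 1 = 5

Adding the intermediate target 2.0 lowers total improvement from 9 to 5 in this toy instance, since the agents at level 0 now settle for the nearer goal.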

Cite as

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. Setting Fair Incentives to Maximize Improvement. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{ahmadi_et_al:LIPIcs.FORC.2023.5,
  author =	{Ahmadi, Saba and Beyhaghi, Hedyeh and Blum, Avrim and Naggita, Keziah},
  title =	{{Setting Fair Incentives to Maximize Improvement}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.5},
  URN =		{urn:nbn:de:0030-drops-179261},
  doi =		{10.4230/LIPIcs.FORC.2023.5},
  annote =	{Keywords: Algorithmic Fairness, Learning for Strategic Behavior, Incentivizing Improvement}
}
Document
Screening with Disadvantaged Agents

Authors: Hedyeh Beyhaghi, Modibo K. Camara, Jason Hartline, Aleck Johnsen, and Sheng Long

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish between students with high skills and students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting "high-skill" students minus the costs incurred from admitting "low-skill" students when considering two skill types and n budget types. Our mechanism makes it possible to give a higher probability of admission to a high-skill student than to a low-skill one, even when the low-skill student can potentially achieve a higher test score due to a larger budget. Further, we extend our admission problem to a setting in which students uniformly receive an exogenous subsidy to increase their budget for effort. This extension can only help the school’s admission objective, and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.
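
A hypothetical illustration of the screening difficulty the abstract describes, assuming test scores are simply skill plus effort and effort is capped by a budget; the additive score model and all numbers are assumptions for illustration, not the paper's formal model:

# Hypothetical illustration: a test score reflects both skill and effort,
# and effort is capped by a budget (additive model assumed here for simplicity).

def max_score(skill, budget):
    # Best score a student can reach by spending their entire effort budget.
    return skill + budget

high_skill_disadvantaged = max_score(skill=5, budget=1)  # tighter budget -> 6
low_skill_advantaged     = max_score(skill=2, budget=4)  # larger budget  -> 6

# Identical top scores, so no deterministic score threshold can separate them.
print(high_skill_disadvantaged, low_skill_advantaged)

With identical maximal scores, a deterministic score cutoff cannot distinguish the two students, which is consistent with the paper's optimal mechanism being stochastic.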

Cite as

Hedyeh Beyhaghi, Modibo K. Camara, Jason Hartline, Aleck Johnsen, and Sheng Long. Screening with Disadvantaged Agents. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{beyhaghi_et_al:LIPIcs.FORC.2023.6,
  author =	{Beyhaghi, Hedyeh and Camara, Modibo K. and Hartline, Jason and Johnsen, Aleck and Long, Sheng},
  title =	{{Screening with Disadvantaged Agents}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.6},
  URN =		{urn:nbn:de:0030-drops-179274},
  doi =		{10.4230/LIPIcs.FORC.2023.6},
  annote =	{Keywords: screening, strategic classification, budgeted mechanism design, fairness, effort-incentives, subsidies, school admission}
}
Document
On Classification of Strategic Agents Who Can Both Game and Improve

Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that increase their perceived credit-worthiness and others that also increase their true credit-worthiness. A decision-maker would like to define a classification rule with few false positives (does not give out many bad loans) while yielding many true positives (giving out many good loans), which includes encouraging agents to improve to become true positives if possible. We consider two models for this problem, a general discrete model and a linear model, and prove algorithmic, learning, and hardness results for each. For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting. We also show hardness for the problem of maximizing the number of true positives subject to a nonzero bound on the number of false positives, and that this hardness holds even for a finite-point version of our linear model. We also show that maximizing the number of true positives subject to no false positives is NP-hard in our full linear model. We additionally provide an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and give additional results for low-dimensional data.
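
A heavily simplified one-dimensional sketch of the objective discussed above (maximizing true positives subject to no false positives when agents can either game or improve); the threshold form of the classifier, the gaming and improvement budgets, the tie-breaking rule, and all numbers are assumptions for illustration and not the paper's discrete or linear model:

# Toy 1-D sketch (assumptions, not the paper's model): each agent has a true
# value v; a threshold classifier accepts perceived values >= theta.  An agent
# can game (raise perceived value by up to GAME without changing v) or improve
# (raise v, and hence perceived value, by up to IMPROVE).  We assume gaming is
# preferred when either action suffices, and "qualified" means true value >= Q.

GAME, IMPROVE, Q = 1.0, 2.0, 5.0
agents = [2.0, 3.5, 4.2, 4.8, 6.0]   # hypothetical true values

def outcome(v, theta):
    """Return (accepted, qualified_after_response) for one agent."""
    if v >= theta:                    # already accepted, no action needed
        return True, v >= Q
    if v + GAME >= theta:             # gaming suffices: accepted, but v unchanged
        return True, v >= Q
    if v + IMPROVE >= theta:          # must improve to be accepted
        return True, v + IMPROVE >= Q
    return False, v >= Q              # cannot reach the threshold

def score(theta):
    results = [outcome(v, theta) for v in agents]
    false_pos = sum(acc and not qual for acc, qual in results)
    true_pos = sum(acc and qual for acc, qual in results)
    return true_pos, false_pos

# Brute-force search for a threshold maximizing true positives with no false positives.
best = max((score(t)[0], t) for t in [3.0, 4.0, 5.0, 6.0, 7.0] if score(t)[1] == 0)
print(best)  # (3, 6.0) in this toy instance

In this toy instance the constrained optimum is the higher threshold 6.0: it is unreachable by gaming alone, so the only way to be accepted from below is to genuinely improve and become qualified.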

Cite as

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. On Classification of Strategic Agents Who Can Both Game and Improve. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ahmadi_et_al:LIPIcs.FORC.2022.3,
  author =	{Ahmadi, Saba and Beyhaghi, Hedyeh and Blum, Avrim and Naggita, Keziah},
  title =	{{On Classification of Strategic Agents Who Can Both Game and Improve}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.3},
  URN =		{urn:nbn:de:0030-drops-165269},
  doi =		{10.4230/LIPIcs.FORC.2022.3},
  annote =	{Keywords: Strategic Classification, Social Welfare, Learning}
}
Document
Randomness and Fairness in Two-Sided Matching with Limited Interviews

Authors: Hedyeh Beyhaghi and Éva Tardos

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We study the outcome in a matching market where both sides have limited ability to consider options. For example, in the national residency matching program, doctors are limited to apply to a small set of hospitals, and hospitals are limited by the time required to interview candidates. Our main findings are the following: (1) In markets where jobs can only consider a limited number of candidates for interviews, limiting the number of applications a candidate can send increases the size of the resulting matching. (2) The fair system of all applicants being allowed to apply to the exact same number of positions maximizes the expected size of the matching. More specifically, starting from an integer k as the number of applications, the matching size decreases as a few applicants are allowed to apply to one additional position (and then increases again once they are all allowed to apply to k+1). Although it seems natural to expect that the size of the matching would be a monotone increasing and concave function in the number of applications, our results show that neither is true. These results hold even in a market where a priori all jobs and all candidates are equally likely to be good, and the judgments of different employers and candidates are independent. Our main technical contribution is computing the expected size of the matching found via the deferred acceptance algorithm as a function of the number of interviews and applications in a market where preferences are uniform and independent. Through simulations we confirm that these findings extend to markets where rankings become correlated after the interviews.
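
A rough Monte Carlo sketch of the kind of quantity studied above: the expected size of the matching produced by candidate-proposing deferred acceptance when each candidate applies to k uniformly random jobs and each job interviews at most d of its applicants. The market size, interview cap, uniform rule for choosing whom to interview, and trial count are assumptions for illustration, not the paper's analytical setup:

# Estimate the expected matching size under limited applications and interviews,
# with uniform, independent preferences (simplifying assumptions throughout).
import random

def expected_match_size(n, k, d, trials=200, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Applications: each candidate applies to k distinct random jobs.
        applicants = {j: [] for j in range(n)}
        for c in range(n):
            for j in rng.sample(range(n), k):
                applicants[j].append(c)
        # Interviews: each job interviews at most d applicants, chosen uniformly here.
        interviewed = {j: rng.sample(a, min(d, len(a))) for j, a in applicants.items()}
        # Uniform, independent preferences restricted to interviewed pairs.
        cand_prefs = {c: [] for c in range(n)}
        for j, cs in interviewed.items():
            for c in cs:
                cand_prefs[c].append(j)
        for c in range(n):
            rng.shuffle(cand_prefs[c])
        job_rank = {j: {c: rng.random() for c in cs} for j, cs in interviewed.items()}
        # Candidate-proposing deferred acceptance (lower job_rank = more preferred).
        match_of_job = {}
        next_prop = {c: 0 for c in range(n)}
        free = [c for c in range(n) if cand_prefs[c]]
        while free:
            c = free.pop()
            while next_prop[c] < len(cand_prefs[c]):
                j = cand_prefs[c][next_prop[c]]
                next_prop[c] += 1
                cur = match_of_job.get(j)
                if cur is None or job_rank[j][c] < job_rank[j][cur]:
                    match_of_job[j] = c
                    if cur is not None and next_prop[cur] < len(cand_prefs[cur]):
                        free.append(cur)
                    break
        total += len(match_of_job)
    return total / trials

for k in (2, 3, 4, 5):
    print(k, expected_match_size(n=50, k=k, d=3))

The loop at the end estimates the expected matching size for several application limits k under these simplified assumptions.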

Cite as

Hedyeh Beyhaghi and Éva Tardos. Randomness and Fairness in Two-Sided Matching with Limited Interviews. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 74:1-74:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{beyhaghi_et_al:LIPIcs.ITCS.2021.74,
  author =	{Beyhaghi, Hedyeh and Tardos, \'{E}va},
  title =	{{Randomness and Fairness in Two-Sided Matching with Limited Interviews}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{74:1--74:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.74},
  URN =		{urn:nbn:de:0030-drops-136139},
  doi =		{10.4230/LIPIcs.ITCS.2021.74},
  annote =	{Keywords: Matching with Short Lists, Stable Matching, Balls in Bins Problem}
}