LIPIcs, Volume 256

4th Symposium on Foundations of Responsible Computing (FORC 2023)




Event

FORC 2023, June 7-9, 2023, Stanford University, California, USA

Editor

Kunal Talwar
  • Apple, Cupertino, CA, USA

Publication Details

  • published at: 2023-06-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-272-3
  • DBLP: db/conf/forc/forc2023

Documents
Complete Volume
LIPIcs, Volume 256, FORC 2023, Complete Volume

Authors: Kunal Talwar


Abstract
LIPIcs, Volume 256, FORC 2023, Complete Volume

Cite as

4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1-230, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Proceedings{talwar:LIPIcs.FORC.2023,
  title =	{{LIPIcs, Volume 256, FORC 2023, Complete Volume}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1--230},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023},
  URN =		{urn:nbn:de:0030-drops-179202},
  doi =		{10.4230/LIPIcs.FORC.2023},
  annote =	{Keywords: LIPIcs, Volume 256, FORC 2023, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Kunal Talwar


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{talwar:LIPIcs.FORC.2023.0,
  author =	{Talwar, Kunal},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{0:i--0:x},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.0},
  URN =		{urn:nbn:de:0030-drops-179219},
  doi =		{10.4230/LIPIcs.FORC.2023.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
From the Real Towards the Ideal: Risk Prediction in a Better World

Authors: Cynthia Dwork, Omer Reingold, and Guy N. Rothblum


Abstract
Prediction algorithms assign scores in [0,1] to individuals, often interpreted as "probabilities" of a positive outcome, for example, of repaying a loan or succeeding in a job. Success, however, rarely depends only on the individual: it is a function of the individual’s interaction with the environment, past and present. Environments do not treat all demographic groups equally. We initiate the study of corrective transformations τ that map predictors of success in the real world to predictors in a better world. In the language of algorithmic fairness, letting p^* denote the true probabilities of success in the real, unfair, world, we characterize the transformations τ for which it is feasible to find a predictor q̃ that is indistinguishable from τ(p^*). The problem is challenging because we do not have access to probabilities or even outcomes in a better world. Nor do we have access to probabilities p^* in the real world. The only data available for training are outcomes from the real world. We obtain a complete characterization of when it is possible to learn predictors that are indistinguishable from τ(p^*), in the form of a simple-to-state criterion describing necessary and sufficient conditions for doing so. This criterion is inextricably bound with the very existence of uncertainty.

Cite as

Cynthia Dwork, Omer Reingold, and Guy N. Rothblum. From the Real Towards the Ideal: Risk Prediction in a Better World. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1:1-1:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{dwork_et_al:LIPIcs.FORC.2023.1,
  author =	{Dwork, Cynthia and Reingold, Omer and Rothblum, Guy N.},
  title =	{{From the Real Towards the Ideal: Risk Prediction in a Better World}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1:1--1:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.1},
  URN =		{urn:nbn:de:0030-drops-179224},
  doi =		{10.4230/LIPIcs.FORC.2023.1},
  annote =	{Keywords: Algorithmic Fairness, Affirmative Action, Learning, Predictions, Multicalibration, Outcome Indistinguishability}
}
New Algorithms and Applications for Risk-Limiting Audits

Authors: Bar Karov and Moni Naor


Abstract
Risk-limiting audits (RLAs) are a significant tool for increasing confidence in the accuracy of elections. They are randomized algorithms which check that an election’s vote tally, as reported by a vote tabulation system, corresponds to the correct candidates winning. If an initial vote count leads to the wrong election winner, an RLA guarantees to identify the error with high probability over its own randomness. These audits operate by sequentially sampling and examining ballots until they can either confirm the reported winner or identify the true winner. The first part of this work suggests a new generic method, called "Batchcomp", for converting classical (ballot-level) RLAs into ones that operate on batches. As a concrete application of the suggested method, we develop the first RLA for the Israeli Knesset elections and convert it into one which operates on batches using "Batchcomp". We ran this suggested method on the real results of recent Knesset elections. The second part of this work suggests a new use case for RLAs: verifying that a population census leads to the correct allocation of parliament seats to a nation’s federal states. We adapt ALPHA [Stark, 2023], an existing RLA method, into one which applies to censuses. This suggested census RLA relies on data both from the census and from an additional procedure which is already conducted in many countries today, called a post-enumeration survey.
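
Neither the paper's Batchcomp conversion nor its census audit is reproduced here, but the classical ballot-polling RLA that such methods build on is easy to sketch. The following is a minimal BRAVO-style audit for a two-candidate contest, where the stopping rule is a sequential probability ratio test at risk limit alpha; all names and numbers are illustrative.

import random

def bravo_audit(ballots, reported_share, alpha=0.05, seed=0):
    """BRAVO-style ballot-polling RLA for a two-candidate contest.

    ballots: list of 'W' (vote for reported winner) or 'L' (for the loser).
    reported_share: reported vote share of the winner; must exceed 0.5.
    Returns True if the audit confirms the outcome at risk limit alpha,
    False if the sample budget is exhausted (escalate to a full hand count).
    """
    assert reported_share > 0.5
    rng = random.Random(seed)
    t, threshold = 1.0, 1.0 / alpha          # SPRT statistic and stop rule
    for _ in range(len(ballots)):
        b = ballots[rng.randrange(len(ballots))]  # sample with replacement
        if b == 'W':
            t *= reported_share / 0.5         # ballot supports the winner
        else:
            t *= (1 - reported_share) / 0.5
        if t >= threshold:
            return True                       # risk limit met: confirm
    return False

# Example: a 55% reported (and actual) winner share is confirmed quickly.
votes = ['W'] * 5500 + ['L'] * 4500
print(bravo_audit(votes, reported_share=0.55))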

Cite as

Bar Karov and Moni Naor. New Algorithms and Applications for Risk-Limiting Audits. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 2:1-2:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{karov_et_al:LIPIcs.FORC.2023.2,
  author =	{Karov, Bar and Naor, Moni},
  title =	{{New Algorithms and Applications for Risk-Limiting Audits}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{2:1--2:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.2},
  URN =		{urn:nbn:de:0030-drops-179232},
  doi =		{10.4230/LIPIcs.FORC.2023.2},
  annote =	{Keywords: Risk-Limiting Audit, RLA, Batch-Level RLA, Census}
}
Bidding Strategies for Proportional Representation in Advertisement Campaigns

Authors: Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen


Abstract
Many companies rely on advertising platforms such as Google, Facebook, or Instagram to recruit a large and diverse applicant pool for job openings. Prior works have shown that equitable bidding may not result in equitable outcomes due to heterogeneous levels of competition for different types of individuals. Suggestions have been made to address this problem via revisions to the advertising platform. However, it may be challenging to convince platforms to undergo a costly revamp of their systems, and such a revision might not offer the flexibility necessary to capture the many types of fairness notions and other constraints that advertisers would like to ensure. Instead, we consider alterations that make no change to the platform mechanism and instead change the bidding strategies used by advertisers. We compare two natural fairness objectives: one in which the advertisers must treat groups equally when bidding in order to achieve a yield with group-parity guarantees, and another in which the bids are not constrained and only the yield must satisfy parity constraints. We show that requiring parity with respect to both bids and yield can result in an arbitrarily large decrease in efficiency compared to requiring equal yield proportions alone. We find that autobidding is a natural way to realize this latter objective, and we show how existing work in this area can be extended to provide efficient bidding strategies that achieve high utility while satisfying group parity constraints, together with deterministic and randomized rounding techniques that uphold these guarantees. Finally, we demonstrate the effectiveness of our proposed solutions on data adapted from a real-world employment dataset.
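
A toy simulation, not taken from the paper, illustrates the starting observation: under heterogeneous competition, equal bids for both groups ("parity of bids") produce very unequal yields, while unconstrained group-specific bids can equalize yield without any change to the platform. The uniform competition model and all numbers are assumptions for illustration.

import random

rng = random.Random(1)
# Toy market: the highest competing bid for a group A impression is drawn
# from Uniform(0.6, 1.0), and for a group B impression from Uniform(0.2, 0.6).
competition = {"A": (0.6, 1.0), "B": (0.2, 0.6)}

def win_rate(bid, group, rounds=100_000):
    lo, hi = competition[group]
    return sum(bid > rng.uniform(lo, hi) for _ in range(rounds)) / rounds

# Parity of bids: equal bids, but very unequal yield across groups.
print(win_rate(0.7, "A"), win_rate(0.7, "B"))   # about 0.25 vs 1.00

# Parity of yield only: group-specific bids equalize the win rates
# without any change to the platform's auction mechanism.
print(win_rate(0.9, "A"), win_rate(0.5, "B"))   # about 0.75 vs 0.75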

Cite as

Inbal Livni Navon, Charlotte Peale, Omer Reingold, and Judy Hanwen Shen. Bidding Strategies for Proportional Representation in Advertisement Campaigns. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{navon_et_al:LIPIcs.FORC.2023.3,
  author =	{Navon, Inbal Livni and Peale, Charlotte and Reingold, Omer and Shen, Judy Hanwen},
  title =	{{Bidding Strategies for Proportional Representation in Advertisement Campaigns}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.3},
  URN =		{urn:nbn:de:0030-drops-179245},
  doi =		{10.4230/LIPIcs.FORC.2023.3},
  annote =	{Keywords: Algorithmic fairness, diversity, advertisement auctions}
}
Multiplicative Metric Fairness Under Composition

Authors: Milan Mossé


Abstract
Dwork, Hardt, Pitassi, Reingold, & Zemel [Dwork et al., 2012] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers which are fair when considered in isolation [Chawla and Jagadeesan, 2020; Chawla et al., 2022; Dwork and Ilvento, 2018; Dwork et al., 2020; Ilvento et al., 2020] and in work studying the relationship between fair treatment of individuals and fair treatment of groups [Dwork et al., 2012; Dwork and Ilvento, 2018; Kim et al., 2018]. Here, we extend these lines of research to the second, less-studied notion, which we call multiplicative metric fairness. In particular, we exactly characterize the fairness of conjunctions and disjunctions of multiplicative metric fair classifiers, and the extent to which a classifier which satisfies multiplicative metric fairness also treats groups fairly. This characterization reveals that whereas additive metric fairness becomes easier to satisfy when probabilities of acceptance are small, leading to unfairness under functional and group compositions, multiplicative metric fairness is better-behaved, due to its scale-invariance.
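
For concreteness, here is one natural reading of the two notions (an assumed formalization for illustration, not necessarily the paper's exact definitions): additive metric fairness bounds the difference |p(x) - p(y)| of acceptance probabilities of similar individuals by their distance d(x, y), while multiplicative metric fairness bounds their ratio by e^{d(x, y)}. The example below shows the scale effect described in the abstract.

import math

def additive_fair(p, d, pairs, tol=1e-9):
    """Additive metric fairness: |p(x) - p(y)| <= d(x, y) for all pairs."""
    return all(abs(p[x] - p[y]) <= d[(x, y)] + tol for x, y in pairs)

def multiplicative_fair(p, d, pairs, tol=1e-9):
    """Multiplicative metric fairness (one natural formalization):
    acceptance probabilities of individuals at distance d may differ by
    at most a factor of e^d; this bound is invariant to rescaling p."""
    return all(max(p[x], p[y]) <= math.exp(d[(x, y)]) * min(p[x], p[y]) + tol
               for x, y in pairs)

# With small acceptance probabilities the additive bound is trivially met,
# even though one individual is accepted ten times as often as a very
# similar one; the multiplicative bound still flags the disparity.
p = {"x": 0.010, "y": 0.001}
d = {("x", "y"): 0.05}
print(additive_fair(p, d, [("x", "y")]))        # True
print(multiplicative_fair(p, d, [("x", "y")]))  # False: ratio 10 > e^0.05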

Cite as

Milan Mossé. Multiplicative Metric Fairness Under Composition. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{mosse:LIPIcs.FORC.2023.4,
  author =	{Moss\'{e}, Milan},
  title =	{{Multiplicative Metric Fairness Under Composition}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{4:1--4:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.4},
  URN =		{urn:nbn:de:0030-drops-179250},
  doi =		{10.4230/LIPIcs.FORC.2023.4},
  annote =	{Keywords: algorithmic fairness, metric fairness, fairness under composition}
}
Setting Fair Incentives to Maximize Improvement

Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita


Abstract
We consider the problem of helping agents improve by setting goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach (or do nothing if no target level is within reach). We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and we consider fairness objectives when the agents belong to different underlying populations. We prove algorithmic, learning, and structural results for each model. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement; agents who previously tried hard to reach a distant target now have a closer target to reach and hence improve less. This especially presents a challenge when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, despite the non-monotonicity property and interference of the target levels, we show that a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we give counterexamples to it in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
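
The non-monotonicity of social welfare is easy to reproduce in a minimal sketch of the model, under the illustrative reading that each agent moves to the nearest target at or above their skill level that lies within their improvement capacity:

def total_improvement(skills, targets, capacity):
    """Each agent moves to the closest target at or above their skill that
    is within their improvement capacity, or stays put if none is in reach."""
    total = 0.0
    for s in skills:
        reachable = [t for t in targets if s <= t <= s + capacity]
        if reachable:
            total += min(reachable) - s   # closest target: least improvement
    return total

skills, capacity = [0.0, 0.0, 0.0], 5.0
print(total_improvement(skills, {5.0}, capacity))       # 15.0
print(total_improvement(skills, {2.0, 5.0}, capacity))  # 6.0: welfare drops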

Cite as

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. Setting Fair Incentives to Maximize Improvement. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{ahmadi_et_al:LIPIcs.FORC.2023.5,
  author =	{Ahmadi, Saba and Beyhaghi, Hedyeh and Blum, Avrim and Naggita, Keziah},
  title =	{{Setting Fair Incentives to Maximize Improvement}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.5},
  URN =		{urn:nbn:de:0030-drops-179261},
  doi =		{10.4230/LIPIcs.FORC.2023.5},
  annote =	{Keywords: Algorithmic Fairness, Learning for Strategic Behavior, Incentivizing Improvement}
}
Screening with Disadvantaged Agents

Authors: Hedyeh Beyhaghi, Modibo K. Camara, Jason Hartline, Aleck Johnsen, and Sheng Long


Abstract
Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish between students with high skills and students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting "high-skill" students minus the costs incurred from admitting "low-skill" students when considering two skill types and n budget types. Our mechanism makes it possible to give a higher probability of admission to a high-skill student than to a low-skill one, even when the low-skill student can potentially obtain a higher test score due to a higher budget. Further, we extend our admission problem to a setting in which students uniformly receive an exogenous subsidy to increase their budget for effort. This extension can only help the school’s admission objective, and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.

Cite as

Hedyeh Beyhaghi, Modibo K. Camara, Jason Hartline, Aleck Johnsen, and Sheng Long. Screening with Disadvantaged Agents. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{beyhaghi_et_al:LIPIcs.FORC.2023.6,
  author =	{Beyhaghi, Hedyeh and Camara, Modibo K. and Hartline, Jason and Johnsen, Aleck and Long, Sheng},
  title =	{{Screening with Disadvantaged Agents}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.6},
  URN =		{urn:nbn:de:0030-drops-179274},
  doi =		{10.4230/LIPIcs.FORC.2023.6},
  annote =	{Keywords: screening, strategic classification, budgeted mechanism design, fairness, effort-incentives, subsidies, school admission}
}
Fair Grading Algorithms for Randomized Exams

Authors: Jiale Chen, Jason Hartline, and Onno Zoeter


Abstract
This paper studies grading algorithms for randomized exams. In a randomized exam, each student is asked a small number of random questions from a large question bank. The predominant grading rule is simple averaging, i.e., calculating grades by averaging scores on the questions each student is asked, which is fair ex-ante, over the randomized questions, but not fair ex-post, on the realized questions. The fair grading problem is to estimate the average grade of each student on the full question bank. The maximum-likelihood estimator for the Bradley-Terry-Luce model on the bipartite student-question graph is shown to be consistent with high probability when the number of questions asked to each student is at least the cubed-logarithm of the number of students. In an empirical study on exam data and in simulations, our algorithm based on the maximum-likelihood estimator significantly outperforms simple averaging in prediction accuracy and ex-post fairness even with a small class and exam size.
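
A minimal sketch of the modeling idea, assuming a Rasch-style instance of the Bradley-Terry-Luce family in which P(student i answers question j correctly) = sigmoid(ability_i - difficulty_j); the paper's estimator and guarantees may differ in their details. Fitting by gradient ascent on the log-likelihood and then averaging predicted scores over the full question bank yields the fair grade the abstract describes.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_rasch(responses, n_students, n_questions, lr=0.05, iters=2000):
    """Gradient-ascent MLE for a Rasch-style model on the bipartite
    student-question graph: P(correct) = sigmoid(ability - difficulty).

    responses: list of (student, question, correct) with correct in {0, 1},
    one entry per question actually asked to a student.
    """
    theta = [0.0] * n_students    # student abilities
    beta = [0.0] * n_questions    # question difficulties
    for _ in range(iters):
        g_theta = [0.0] * n_students
        g_beta = [0.0] * n_questions
        for i, j, y in responses:
            err = y - sigmoid(theta[i] - beta[j])  # observed minus expected
            g_theta[i] += err
            g_beta[j] -= err
        theta = [t + lr * g for t, g in zip(theta, g_theta)]
        beta = [b + lr * g for b, g in zip(beta, g_beta)]
    return theta, beta

def fair_grade(i, theta, beta):
    """Estimated average grade of student i over the full question bank,
    including the questions this student was never asked."""
    return sum(sigmoid(theta[i] - b) for b in beta) / len(beta)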

Cite as

Jiale Chen, Jason Hartline, and Onno Zoeter. Fair Grading Algorithms for Randomized Exams. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{chen_et_al:LIPIcs.FORC.2023.7,
  author =	{Chen, Jiale and Hartline, Jason and Zoeter, Onno},
  title =	{{Fair Grading Algorithms for Randomized Exams}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.7},
  URN =		{urn:nbn:de:0030-drops-179282},
  doi =		{10.4230/LIPIcs.FORC.2023.7},
  annote =	{Keywords: Ex-ante and Ex-post Fairness, Item Response Theory, Algorithmic Fairness in Education}
}
An Algorithmic Approach to Address Course Enrollment Challenges

Authors: Arpita Biswas, Yiduo Ke, Samir Khuller, and Quanquan C. Liu


Abstract
Massive surges of enrollments in courses have led to a crisis in several computer science departments: not only is the demand for certain courses extremely high from majors, but the demand from non-majors is also very high. Much of the time, this leads to significant frustration on the part of the students, and getting seats in desired courses is a rather ad hoc process. One approach is to first collect information from students about which courses they want to take and to develop optimization models for assigning students to available seats in a fair manner. What makes this problem complex is that the courses themselves have time conflicts, and the students have credit caps (an upper bound on the number of courses they would like to enroll in). We model this problem as follows. We have n agents (students), and there are "resources" (these correspond to courses). Each agent is only interested in a subset of the resources (courses of interest), and each resource can only be assigned to a bounded number of agents (available seats). In addition, each resource corresponds to an interval of time, and the objective is to assign non-overlapping resources to agents so as to produce "fair and high utility" schedules. In this model, we provide a number of results under various settings and objective functions. Specifically, in this paper, we consider the following objective functions: total utility, max-min (Santa Claus objective), and envy-freeness. The total utility objective function maximizes the sum of the utilities of all courses assigned to students. The max-min objective maximizes the minimum utility obtained by any student. Finally, envy-freeness ensures that no student envies another student’s allocation. Under these settings and objective functions, we show a number of theoretical results. Specifically, we show that course allocation under time conflicts is NP-complete but becomes polynomial-time solvable when there are only a constant number of students, or when all credit caps, course lengths, and utilities are uniform. Furthermore, we give a near-linear time algorithm for obtaining a 1/2-factor approximation for the general total-utility maximization problem when utility functions are binary. In addition, we show that there exists a near-linear time algorithm that obtains a 1/2-factor approximation on total utility and a 1/4-factor approximation on max-min utility when given uniform credit caps and uniform utilities. For the setting of binary valuations, we give three polynomial-time algorithms: a 1/2-factor approximation of total utility, envy-freeness up to one item, and a constant-factor approximation of the max-min utility value when course lengths are within a constant factor of each other. Finally, we conclude with experimental results that demonstrate that our algorithms yield high-quality results in real-world settings.
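
None of the paper's approximation algorithms is reproduced here; the following is only a baseline greedy heuristic over an assumed encoding of the model (courses as time intervals with seat capacities, students with desired-course lists and a credit cap), to make the problem shape concrete.

def greedy_schedule(students, courses, seats, credit_cap):
    """Greedy heuristic for the course-allocation model in the abstract:
    courses are time intervals with seat capacities, students have desired
    courses and a credit cap, and assigned courses must not overlap.

    students: dict student name -> list of desired course ids.
    courses:  dict course id -> (start, end) time interval.
    """
    remaining = dict(seats)
    schedule = {s: [] for s in students}
    for s, wanted in students.items():
        # Classic interval-scheduling order: earliest finishing time first.
        for c in sorted(wanted, key=lambda c: courses[c][1]):
            start, end = courses[c]
            if (remaining[c] > 0 and len(schedule[s]) < credit_cap and
                    all(end <= courses[o][0] or courses[o][1] <= start
                        for o in schedule[s])):
                schedule[s].append(c)
                remaining[c] -= 1
    return schedule

courses = {"algo": (9, 11), "ml": (10, 12), "os": (13, 15)}
seats = {"algo": 1, "ml": 2, "os": 1}
students = {"ada": ["algo", "ml", "os"], "alan": ["algo", "os"]}
print(greedy_schedule(students, courses, seats, credit_cap=2))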

Cite as

Arpita Biswas, Yiduo Ke, Samir Khuller, and Quanquan C. Liu. An Algorithmic Approach to Address Course Enrollment Challenges. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 8:1-8:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{biswas_et_al:LIPIcs.FORC.2023.8,
  author =	{Biswas, Arpita and Ke, Yiduo and Khuller, Samir and Liu, Quanquan C.},
  title =	{{An Algorithmic Approach to Address Course Enrollment Challenges}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{8:1--8:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.8},
  URN =		{urn:nbn:de:0030-drops-179297},
  doi =		{10.4230/LIPIcs.FORC.2023.8},
  annote =	{Keywords: fairness, allocation, matching, algorithms}
}
Fair Correlation Clustering in Forests

Authors: Katrin Casel, Tobias Friedrich, Martin Schirneck, and Simon Wietheger


Abstract
The study of algorithmic fairness has received growing attention recently. This stems from the awareness that bias in the input data for machine learning systems may result in discriminatory outputs. For clustering tasks, one of the most central notions of fairness is the formalization by Chierichetti, Kumar, Lattanzi, and Vassilvitskii [NeurIPS 2017]. A clustering is said to be fair if each cluster has the same distribution of manifestations of a sensitive attribute as the whole input set. This is motivated by various applications where the objects to be clustered have sensitive attributes that should not be over- or underrepresented. Most research on this version of fair clustering has focused on centroid-based objectives. In contrast, we discuss the applicability of this fairness notion to Correlation Clustering. The existing literature on the resulting Fair Correlation Clustering problem either presents approximation algorithms with poor approximation guarantees or severely limits the possible distributions of the sensitive attribute (often only two manifestations with a 1:1 ratio are considered). Our goal is to understand if there is hope for better results in between these two extremes. To this end, we consider restricted graph classes which allow us to characterize the distributions of sensitive attributes for which this form of fairness is tractable from a complexity point of view. While existing work on Fair Correlation Clustering gives approximation algorithms, we focus on exact solutions and investigate whether there are efficiently solvable instances. The unfair version of Correlation Clustering is trivial on forests, but adding fairness creates a surprisingly rich picture of complexities. We give an overview of the distributions and types of forests where Fair Correlation Clustering turns from tractable to intractable. As the most surprising insight, we find that the cause of the hardness of Fair Correlation Clustering is not the strictness of the fairness condition: we lift most of our results to also hold for the relaxed version of the fairness condition. Instead, the source of hardness seems to be the distribution of the sensitive attribute. On the positive side, we identify some reasonable distributions that are indeed tractable. While this tractability is only shown for forests, it may open an avenue to design reasonable approximations for larger graph classes.
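
For concreteness, a small checker for the exact (non-relaxed) fairness condition, under the natural reading that a clustering is fair when every cluster's distribution of the sensitive attribute matches that of the whole input set:

from collections import Counter
from fractions import Fraction

def is_fair_clustering(colors, clusters):
    """Exact fairness in the sense of Chierichetti et al.: every cluster
    has the same color distribution as the whole input set.

    colors:   dict vertex -> sensitive-attribute value ("color").
    clusters: list of vertex lists partitioning the vertices.
    """
    def distribution(vertices):
        counts = Counter(colors[v] for v in vertices)
        n = len(vertices)
        return {c: Fraction(k, n) for c, k in counts.items()}

    whole = distribution([v for cl in clusters for v in cl])
    return all(distribution(cl) == whole for cl in clusters)

colors = {1: "red", 2: "blue", 3: "red", 4: "blue"}
print(is_fair_clustering(colors, [[1, 2], [3, 4]]))  # True: 1:1 everywhere
print(is_fair_clustering(colors, [[1, 3], [2, 4]]))  # False: monochromatic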

Cite as

Katrin Casel, Tobias Friedrich, Martin Schirneck, and Simon Wietheger. Fair Correlation Clustering in Forests. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{casel_et_al:LIPIcs.FORC.2023.9,
  author =	{Casel, Katrin and Friedrich, Tobias and Schirneck, Martin and Wietheger, Simon},
  title =	{{Fair Correlation Clustering in Forests}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.9},
  URN =		{urn:nbn:de:0030-drops-179307},
  doi =		{10.4230/LIPIcs.FORC.2023.9},
  annote =	{Keywords: correlation clustering, disparate impact, fair clustering, relaxed fairness}
}
Distributionally Robust Data Join

Authors: Pranjal Awasthi, Christopher Jung, and Jamie Morgenstern


Abstract
Suppose we are given two datasets: a labeled dataset, and an unlabeled dataset which also has additional auxiliary features not present in the first. What is the most principled way to use these datasets together to construct a predictor? The answer should depend upon whether these datasets are generated by the same or different distributions over their mutual feature sets, and how similar the test distribution will be to either of those distributions. In many applications, the two datasets will likely follow different distributions, but both may be close to the test distribution. We introduce the problem of building a predictor which minimizes the maximum loss over all probability distributions over the original features, auxiliary features, and binary labels that are within Wasserstein distance r₁ of the empirical distribution over the labeled dataset and within r₂ of that of the unlabeled dataset. This can be thought of as a generalization of distributionally robust optimization (DRO), which allows for two data sources, one of which is unlabeled and may contain auxiliary features.
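
One way to write the objective described above as a formula (a sketch that glosses over how distributions on different feature spaces are compared, which the paper makes precise): with \hat{P}_{\mathrm{lab}} and \hat{P}_{\mathrm{unlab}} the two empirical distributions, W the Wasserstein distance, f the predictor, and \ell a loss,

\min_{f}\;
\max_{\substack{P \,:\; W(P,\hat{P}_{\mathrm{lab}}) \le r_1,\\
\phantom{P \,:\;} W(P,\hat{P}_{\mathrm{unlab}}) \le r_2}}
\; \mathbb{E}_{(x,x',y)\sim P}\bigl[\ell(f(x,x'),\,y)\bigr]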

Cite as

Pranjal Awasthi, Christopher Jung, and Jamie Morgenstern. Distributionally Robust Data Join. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{awasthi_et_al:LIPIcs.FORC.2023.10,
  author =	{Awasthi, Pranjal and Jung, Christopher and Morgenstern, Jamie},
  title =	{{Distributionally Robust Data Join}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.10},
  URN =		{urn:nbn:de:0030-drops-179311},
  doi =		{10.4230/LIPIcs.FORC.2023.10},
  annote =	{Keywords: Distributionally Robust Optimization, Semi-Supervised Learning, Learning Theory}
}
Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes

Authors: Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik


Abstract
Side channel attacks, and in particular timing attacks, are a fundamental obstacle to the secure implementation of algorithms and cryptographic protocols. These attacks and their countermeasures have been widely researched for decades. We offer a new perspective on resistance to timing attacks. We focus on sampling algorithms and their application to differential privacy. We define sampling algorithms that do not reveal information about the sampled output through their running time. More specifically: (1) We characterize the distributions that can be sampled from in a "time-oblivious" way, meaning that the running time does not leak any information about the output. We provide an algorithm for sampling from these distributions that is optimal in terms of the randomness it uses. We give an example of an efficient randomized algorithm 𝒜 such that there is no subexponential algorithm with the same output as 𝒜 that does not reveal information on the output or the input, thereby showing that leaking information on either the input or the output is unavoidable. (2) We consider the impact of timing attacks on (pure) differential privacy mechanisms. It turns out that if the range of the mechanism is unbounded, as in counting, then any time-oblivious pure DP mechanism must give a useless output with constant probability (the constant is mechanism-dependent) and must have infinite expected running time. We show that, up to these limitations, it is possible to transform any pure DP mechanism into a time-oblivious one.
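
The paper's constructions are not reproduced here; the following toy contrast (in Python, which of course offers no real constant-time guarantees) is only meant to illustrate the definition: the first sampler's running time reveals its output, while the inverse-CDF sampler over a finite table always does the same amount of work. The need to truncate the geometric distribution echoes the difficulty with unbounded ranges noted above.

import random

def leaky_geometric(p, rng):
    """The number of loop iterations equals the output: a timing channel."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def oblivious_sample(table, rng):
    """Inverse-CDF sampling from a finite distribution that always scans
    the whole table, so the work done does not depend on the outcome
    (an algorithmic illustration only; Python itself is not constant-time)."""
    u = rng.random()
    cdf, result = 0.0, None
    for value, prob in table:
        cdf += prob
        # Keep the first value whose CDF bin contains u, but keep scanning.
        result = value if (result is None and u < cdf) else result
    return result if result is not None else table[-1][0]

rng = random.Random(0)
# A geometric distribution must be truncated to get a finite table; the
# leftover tail mass here is 2^-20.
truncated_geometric = [(k, 0.5 ** k) for k in range(1, 21)]
print(oblivious_sample(truncated_geometric, rng))
print(leaky_geometric(0.5, rng))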

Cite as

Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik. Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bendov_et_al:LIPIcs.FORC.2023.11,
  author =	{Ben Dov, Yoav and David, Liron and Naor, Moni and Tzalik, Elad},
  title =	{{Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.11},
  URN =		{urn:nbn:de:0030-drops-179329},
  doi =		{10.4230/LIPIcs.FORC.2023.11},
  annote =	{Keywords: Differential Privacy}
}
