13 Search Results for "Dwork, Cynthia"


Document
Quantum Automating TC⁰-Frege Is LWE-Hard

Authors: Noel Arteche, Gaia Carenini, and Matthew Gray

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We prove the first hardness results against efficient proof search by quantum algorithms. We show that under Learning with Errors (LWE), the standard lattice-based cryptographic assumption, no quantum algorithm can weakly automate TC⁰-Frege. This extends the line of results of Krajíček and Pudlák (Information and Computation, 1998), Bonet, Pitassi, and Raz (FOCS, 1997), and Bonet, Domingo, Gavaldà, Maciel, and Pitassi (Computational Complexity, 2004), who showed that Extended Frege, TC⁰-Frege, and AC⁰-Frege, respectively, cannot be weakly automated by classical algorithms if either the RSA cryptosystem or the Diffie-Hellman key exchange protocol is secure. To the best of our knowledge, this is the first interaction between quantum computation and propositional proof search.

Cite as

Noel Arteche, Gaia Carenini, and Matthew Gray. Quantum Automating TC⁰-Frege Is LWE-Hard. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 15:1-15:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arteche_et_al:LIPIcs.CCC.2024.15,
  author =	{Arteche, Noel and Carenini, Gaia and Gray, Matthew},
  title =	{{Quantum Automating TC⁰-Frege Is LWE-Hard}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{15:1--15:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.15},
  URN =		{urn:nbn:de:0030-drops-204117},
  doi =		{10.4230/LIPIcs.CCC.2024.15},
  annote =	{Keywords: automatability, post-quantum cryptography, feasible interpolation}
}
Document
A Linear Type System for L^p-Metric Sensitivity Analysis

Authors: Victor Sannier and Patrick Baillot

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
When working in optimisation or privacy protection, one may need to estimate the sensitivity of computer programs, i.e., the maximum multiplicative increase in the distance between two inputs and the corresponding two outputs. In particular, differential privacy is a rigorous and widely used notion of privacy that is closely related to sensitivity. Several type systems for sensitivity and differential privacy based on linear logic have been proposed in the literature, starting with the functional language Fuzz. However, they are either limited to certain metrics (L¹ and L^∞), and thus to the associated privacy mechanisms, or they rely on a complex notion of type contexts that does not interact well with operational semantics. We therefore propose a graded linear type system - inspired by Bunched Fuzz [Wunder et al., 2023] - called Plurimetric Fuzz that handles L^p vector metrics (for 1 ≤ p ≤ +∞), uses standard type contexts, gives reasonable bounds on sensitivity, and has good metatheoretical properties. We also provide a denotational semantics in terms of metric complete partial orders, and translation mappings from and to Fuzz.
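As a concrete illustration of the sensitivity notion (a sketch in Python, not from the paper; all names are ours): the snippet below empirically lower-bounds the L^p sensitivity of a function by random probing, whereas a type system such as Fuzz certifies an upper bound statically.

import numpy as np

def lp_distance(u, v, p):
    # L^p distance between vectors; p = np.inf gives the sup metric.
    return np.linalg.norm(np.atleast_1d(u) - np.atleast_1d(v), ord=p)

def empirical_sensitivity(f, dim, p, trials=10_000, seed=0):
    # Lower-bound sup_{x != x'} d_p(f(x), f(x')) / d_p(x, x') by sampling
    # random nearby input pairs; a static analysis would bound it from above.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.normal(size=dim)
        x2 = x + rng.normal(scale=0.1, size=dim)
        dx = lp_distance(x, x2, p)
        if dx > 0:
            best = max(best, lp_distance(f(x), f(x2), p) / dx)
    return best

# The mean of a 10-dimensional vector is (1/10)-sensitive in the L^1 metric;
# the empirical estimate approaches 0.1 from below.
print(empirical_sensitivity(lambda x: x.mean(), dim=10, p=1))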

Cite as

Victor Sannier and Patrick Baillot. A Linear Type System for L^p-Metric Sensitivity Analysis. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sannier_et_al:LIPIcs.FSCD.2024.12,
  author =	{Sannier, Victor and Baillot, Patrick},
  title =	{{A Linear Type System for L^p-Metric Sensitivity Analysis}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.12},
  URN =		{urn:nbn:de:0030-drops-203412},
  doi =		{10.4230/LIPIcs.FSCD.2024.12},
  annote =	{Keywords: type system, linear logic, sensitivity, vector metrics, differential privacy, lambda-calculus, functional programming, denotational semantics}
}
Document
Track A: Algorithms, Complexity and Games
The Discrepancy of Shortest Paths

Authors: Greg Bodwin, Chengyuan Deng, Jie Gao, Gary Hoppenworth, Jalaj Upadhyay, and Chen Wang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The hereditary discrepancy of a set system is a quantitative measure of the pseudorandom properties of the system. Roughly speaking, hereditary discrepancy measures how well one can 2-color the elements of the system so that each set contains approximately the same number of elements of each color. Hereditary discrepancy has numerous applications in computational geometry, communication complexity and derandomization. More recently, the hereditary discrepancy of the set system of shortest paths has found applications in differential privacy [Chen et al. SODA 23]. The contribution of this paper is to improve the upper and lower bounds on the hereditary discrepancy of set systems of unique shortest paths in graphs. In particular, we show that any system of unique shortest paths in an undirected weighted graph has hereditary discrepancy O(n^{1/4}), and we construct lower bound examples demonstrating that this bound is tight up to polylog n factors. Our lower bounds hold even for planar graphs and bipartite graphs, and improve a previous lower bound of Ω(n^{1/6}) obtained by applying the trace bound of Chazelle and Lvov [SoCG'00] to a classical point-line system of Erdős. As applications, we improve the lower bound on the additive error for differentially-private all pairs shortest distances from Ω(n^{1/6}) [Chen et al. SODA 23] to Ω̃(n^{1/4}), and we improve the lower bound on additive error for the differentially-private all sets range queries problem to Ω̃(n^{1/4}), which is tight up to polylog n factors [Deng et al. WADS 23].
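For intuition about the definition, here is a brute-force computation of discrepancy and hereditary discrepancy on a toy system of "paths" (a hypothetical sketch; real instances are far beyond enumeration):

from itertools import product

def discrepancy(universe, sets):
    # disc = min over +/-1 colorings chi of max over S of |sum_{e in S} chi(e)|.
    best = float("inf")
    for signs in product([-1, 1], repeat=len(universe)):
        chi = dict(zip(universe, signs))
        best = min(best, max(abs(sum(chi[e] for e in S)) for S in sets))
    return best

def hereditary_discrepancy(universe, sets):
    # Hereditary discrepancy: max of disc over all restrictions of the universe.
    universe = list(universe)
    best = 0
    for mask in product([0, 1], repeat=len(universe)):
        sub = {e for e, keep in zip(universe, mask) if keep}
        restricted = [tuple(e for e in S if e in sub) for S in sets]
        best = max(best, discrepancy(sorted(sub), restricted))
    return best

# Three vertices on a path, with shortest paths {1,2}, {2,3}, {1,2,3}:
paths = [(1, 2), (2, 3), (1, 2, 3)]
print(discrepancy([1, 2, 3], paths))             # 1 ({1,2,3} has odd size)
print(hereditary_discrepancy([1, 2, 3], paths))  # 1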

Cite as

Greg Bodwin, Chengyuan Deng, Jie Gao, Gary Hoppenworth, Jalaj Upadhyay, and Chen Wang. The Discrepancy of Shortest Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 27:1-27:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bodwin_et_al:LIPIcs.ICALP.2024.27,
  author =	{Bodwin, Greg and Deng, Chengyuan and Gao, Jie and Hoppenworth, Gary and Upadhyay, Jalaj and Wang, Chen},
  title =	{{The Discrepancy of Shortest Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{27:1--27:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.27},
  URN =		{urn:nbn:de:0030-drops-201705},
  doi =		{10.4230/LIPIcs.ICALP.2024.27},
  annote =	{Keywords: Discrepancy, hereditary discrepancy, shortest paths, differential privacy}
}
Document
Track A: Algorithms, Complexity and Games
On the Cut-Query Complexity of Approximating Max-Cut

Authors: Orestis Plevrakis, Seyoon Ragavan, and S. Matthew Weinberg

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider the problem of query-efficient global max-cut on a weighted undirected graph in the value oracle model examined by [Rubinstein et al., 2018]. Graph algorithms in this cut query model and other query models have recently been studied for various other problems such as min-cut, connectivity, bipartiteness, and triangle detection. Max-cut in the cut query model can also be viewed as a natural special case of submodular function maximization: on query S ⊆ V, the oracle returns the total weight of the cut between S and V\S. Our first main technical result is a lower bound stating that a deterministic algorithm achieving a c-approximation for any c > 1/2 requires Ω(n) queries. This uses an extension of the cut dimension to rule out approximation (prior work of [Graur et al., 2020] introducing the cut dimension only rules out exact solutions). Secondly, we provide a randomized algorithm with Õ(n) queries that finds a c-approximation for any c < 1. We achieve this using a query-efficient sparsifier for undirected weighted graphs (prior work of [Rubinstein et al., 2018] holds only for unweighted graphs). To complement these results, for most constants c ∈ (0,1], we nail down the query complexity of achieving a c-approximation, for both deterministic and randomized algorithms (up to logarithmic factors). Analogously to general submodular function maximization in the same model, we observe a phase transition at c = 1/2: we design a deterministic algorithm for global c-approximate max-cut in O(log n) queries for any c < 1/2, and show that any randomized algorithm requires Ω(n/log n) queries to find a c-approximate max-cut for any c > 1/2. Additionally, we show that any deterministic algorithm requires Ω(n²) queries to find an exact max-cut (enough to learn the entire graph).
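The query model itself is easy to simulate. The sketch below (illustrative, not one of the paper's algorithms) implements a cut-value oracle and recovers every edge weight in n + n(n-1)/2 queries via w(u,v) = (cut({u}) + cut({v}) - cut({u,v})) / 2, matching the abstract's remark that O(n²) queries are enough to learn the entire graph.

import itertools

class CutOracle:
    # Value oracle for a weighted undirected graph: query(S) returns the
    # total weight of edges with exactly one endpoint in S.
    def __init__(self, n, weights):
        self.n = n        # vertices are 0..n-1
        self.w = weights  # dict {(u, v): weight} with u < v
        self.queries = 0

    def query(self, S):
        self.queries += 1
        S = set(S)
        return sum(wt for (u, v), wt in self.w.items() if (u in S) != (v in S))

def learn_graph(oracle):
    # cut({u}) is the weighted degree of u, so
    # w(u, v) = (cut({u}) + cut({v}) - cut({u, v})) / 2.
    deg = [oracle.query({u}) for u in range(oracle.n)]
    learned = {}
    for u, v in itertools.combinations(range(oracle.n), 2):
        wt = (deg[u] + deg[v] - oracle.query({u, v})) / 2
        if wt:
            learned[(u, v)] = wt
    return learned

oracle = CutOracle(4, {(0, 1): 2.0, (1, 2): 1.0, (0, 3): 5.0})
print(learn_graph(oracle))  # {(0, 1): 2.0, (0, 3): 5.0, (1, 2): 1.0}
print(oracle.queries)       # 4 + 6 = 10 queries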

Cite as

Orestis Plevrakis, Seyoon Ragavan, and S. Matthew Weinberg. On the Cut-Query Complexity of Approximating Max-Cut. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 115:1-115:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{plevrakis_et_al:LIPIcs.ICALP.2024.115,
  author =	{Plevrakis, Orestis and Ragavan, Seyoon and Weinberg, S. Matthew},
  title =	{{On the Cut-Query Complexity of Approximating Max-Cut}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{115:1--115:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.115},
  URN =		{urn:nbn:de:0030-drops-202587},
  doi =		{10.4230/LIPIcs.ICALP.2024.115},
  annote =	{Keywords: query complexity, maximum cut, approximation algorithms, graph sparsification}
}
Document
From the Real Towards the Ideal: Risk Prediction in a Better World

Authors: Cynthia Dwork, Omer Reingold, and Guy N. Rothblum

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Prediction algorithms assign scores in [0,1] to individuals, often interpreted as "probabilities" of a positive outcome, for example, of repaying a loan or succeeding in a job. Success, however, rarely depends only on the individual: it is a function of the individual’s interaction with the environment, past and present. Environments do not treat all demographic groups equally. We initiate the study of corrective transformations τ that map predictors of success in the real world to predictors in a better world. In the language of algorithmic fairness, letting p^* denote the true probabilities of success in the real, unfair, world, we characterize the transformations τ for which it is feasible to find a predictor q̃ that is indistinguishable from τ(p^*). The problem is challenging because we do not have access to probabilities or even outcomes in a better world. Nor do we have access to probabilities p^* in the real world. The only data available for training are outcomes from the real world. We obtain a complete characterization of when it is possible to learn predictors that are indistinguishable from τ(p^*), in the form of a simple-to-state criterion describing necessary and sufficient conditions for doing so. This criterion is inextricably bound with the very existence of uncertainty.

Cite as

Cynthia Dwork, Omer Reingold, and Guy N. Rothblum. From the Real Towards the Ideal: Risk Prediction in a Better World. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 1:1-1:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2023.1,
  author =	{Dwork, Cynthia and Reingold, Omer and Rothblum, Guy N.},
  title =	{{From the Real Towards the Ideal: Risk Prediction in a Better World}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{1:1--1:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.1},
  URN =		{urn:nbn:de:0030-drops-179224},
  doi =		{10.4230/LIPIcs.FORC.2023.1},
  annote =	{Keywords: Algorithmic Fairness, Affirmative Action, Learning, Predictions, Multicalibration, Outcome Indistinguishability}
}
Document
HappyMap : A Generalized Multicalibration Method

Authors: Zhun Deng, Cynthia Dwork, and Linjun Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Multicalibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future, target domains (approximately) captured by the re-weightings. Formally, multicalibration with respect to C bounds |𝔼_{(x,y)∼D}[c(f(x),x)⋅(f(x)-y)]| for all c ∈ C. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose s-Happy Multicalibration, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
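The displayed multicalibration bound is straightforward to audit empirically over a finite auditor class. A minimal sketch on synthetic data (illustrative names, not the paper's code):

import numpy as np

def mc_violation(f_scores, x, y, auditors):
    # Empirical version of max over c in C of |E[c(f(x), x) * (f(x) - y)]|.
    residual = f_scores - y
    return max(abs(np.mean(c(f_scores, x) * residual)) for c in auditors)

rng = np.random.default_rng(0)
n = 50_000
x = rng.uniform(size=n)                       # one covariate
p_true = 0.2 + 0.6 * x                        # true outcome probabilities
y = (rng.uniform(size=n) < p_true).astype(float)

# Auditors: a demographic-subgroup indicator and a score-bucket indicator.
auditors = [
    lambda f, x: (x < 0.5).astype(float),
    lambda f, x: ((f >= 0.4) & (f < 0.6)).astype(float),
]

print(mc_violation(p_true, x, y, auditors))           # ~ 0: calibrated
print(mc_violation(np.full(n, 0.5), x, y, auditors))  # ~ 0.075: constant predictor fails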

Cite as

Zhun Deng, Cynthia Dwork, and Linjun Zhang. HappyMap : A Generalized Multicalibration Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 41:1-41:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{deng_et_al:LIPIcs.ITCS.2023.41,
  author =	{Deng, Zhun and Dwork, Cynthia and Zhang, Linjun},
  title =	{{HappyMap : A Generalized Multicalibration Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{41:1--41:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.41},
  URN =		{urn:nbn:de:0030-drops-175449},
  doi =		{10.4230/LIPIcs.ITCS.2023.41},
  annote =	{Keywords: algorithmic fairness, target-independent learning, transfer learning}
}
Document
Making Decisions Under Outcome Performativity

Authors: Michael P. Kim and Juan C. Perdomo

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Decision-makers often act in response to data-driven predictions, with the goal of achieving favorable outcomes. In such settings, predictions don’t passively forecast the future; instead, predictions actively shape the distribution of outcomes they are meant to predict. This performative prediction setting [Brown et al., 2022] raises new challenges for learning "optimal" decision rules. In particular, existing solution concepts do not address the apparent tension between the goals of forecasting outcomes accurately and steering individuals to achieve desirable outcomes. To contend with this concern, we introduce a new optimality concept - performative omniprediction - adapted from the supervised (non-performative) learning setting [Gopalan et al., 2022]. A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly-competing objectives. Our main result demonstrates that efficient performative omnipredictors exist, under a natural restriction of performative prediction, which we call outcome performativity. On a technical level, our results follow by carefully generalizing the notion of outcome indistinguishability [Cynthia Dwork et al., 2021] to the outcome performative setting. From an appropriate notion of Performative OI, we recover many consequences known to hold in the supervised setting, such as omniprediction and universal adaptability [Kim et al., 2022].

Cite as

Michael P. Kim and Juan C. Perdomo. Making Decisions Under Outcome Performativity. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kim_et_al:LIPIcs.ITCS.2023.79,
  author =	{Kim, Michael P. and Perdomo, Juan C.},
  title =	{{Making Decisions Under Outcome Performativity}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{79:1--79:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.79},
  URN =		{urn:nbn:de:0030-drops-175824},
  doi =		{10.4230/LIPIcs.ITCS.2023.79},
  annote =	{Keywords: performative prediction, outcome indistinguishability}
}
Document
Improved Generalization Guarantees in Restricted Data Models

Authors: Elbert Du and Cynthia Dwork

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
Differential privacy is known to protect against threats to validity incurred due to adaptive, or exploratory, data analysis - even when the analyst adversarially searches for a statistical estimate that diverges from the true value of the quantity of interest on the underlying population. The cost of this protection is the accuracy loss incurred by differential privacy. In this work, inspired by standard models in the genomics literature, we consider data models in which individuals are represented by a sequence of attributes with the property that distant attributes are only weakly correlated. We show that, under this assumption, it is possible to "re-use" privacy budget on different portions of the data, significantly improving accuracy without increasing the risk of overfitting.
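For context, the baseline such a result improves on is standard sequential composition of differentially private queries, where the privacy (and generalization) budget is charged once per query. A minimal sketch on synthetic data (not the paper's mechanism): one Laplace-noised count per attribute block.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Standard epsilon-DP Laplace mechanism: add Lap(sensitivity / epsilon) noise.
    return true_value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# Toy data: n individuals, each a sequence of d binary attributes,
# queried in contiguous blocks (distant blocks assumed weakly correlated).
n, d, block = 1000, 100, 10
data = rng.integers(0, 2, size=(n, d))

epsilon = 0.5
# One counting query per block; each count has sensitivity 1, since changing
# one individual's record moves any single count by at most 1.
answers = [
    laplace_mechanism(data[:, j:j + block].any(axis=1).sum(), 1.0, epsilon, rng)
    for j in range(0, d, block)
]
print([round(a, 1) for a in answers])
# Naive sequential composition charges epsilon per query, (d / block) * epsilon
# in total; the abstract's point is that under weak correlation between distant
# blocks, this budget can be "re-used" across blocks without extra overfitting risk.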

Cite as

Elbert Du and Cynthia Dwork. Improved Generalization Guarantees in Restricted Data Models. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{du_et_al:LIPIcs.FORC.2022.6,
  author =	{Du, Elbert and Dwork, Cynthia},
  title =	{{Improved Generalization Guarantees in Restricted Data Models}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.6},
  URN =		{urn:nbn:de:0030-drops-165299},
  doi =		{10.4230/LIPIcs.FORC.2022.6},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Document
Metric Learning for Individual Fairness

Authors: Christina Ilvento

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
There has been much discussion concerning how "fairness" should be measured or enforced in classification. Individual Fairness [Dwork et al., 2012], which requires that similar individuals be treated similarly, is a highly appealing definition as it gives strong treatment guarantees for individuals. Unfortunately, the need for a task-specific similarity metric has prevented its use in practice. In this work, we propose a solution to the problem of approximating a metric for Individual Fairness based on human judgments. Our model assumes access to a human fairness arbiter who is free of explicit biases and possesses sufficient domain knowledge to evaluate similarity. Our contributions include definitions for metric approximation relevant for Individual Fairness, constructions for approximations from a limited number of realistic queries to the arbiter on a sample of individuals, and learning procedures to construct hypotheses for metric approximations which generalize to unseen samples under certain assumptions of learnability of distance threshold functions.

Cite as

Christina Ilvento. Metric Learning for Individual Fairness. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ilvento:LIPIcs.FORC.2020.2,
  author =	{Ilvento, Christina},
  title =	{{Metric Learning for Individual Fairness}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{2:1--2:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.2},
  URN =		{urn:nbn:de:0030-drops-120183},
  doi =		{10.4230/LIPIcs.FORC.2020.2},
  annote =	{Keywords: metric learning, individual fairness, fair machine learning}
}
Document
Individual Fairness in Pipelines

Authors: Cynthia Dwork, Christina Ilvento, and Meena Jagadeesan

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
It is well understood that a system built from individually fair components may not itself be individually fair. In this work, we investigate individual fairness under pipeline composition. Pipelines differ from ordinary sequential or repeated composition in that individuals may drop out at any stage, and classification in subsequent stages may depend on the remaining "cohort" of individuals. As an example, a company might hire a team for a new project and at a later point promote the highest performer on the team. Unlike other repeated classification settings, where the degree of unfairness degrades gracefully over multiple fair steps, the degree of unfairness in pipelines can be arbitrary, even in a pipeline with just two stages. Guided by a panoply of real-world examples, we provide a rigorous framework for evaluating different types of fairness guarantees for pipelines. We show that naïve auditing is unable to uncover systematic unfairness and that, in order to ensure fairness, some form of dependence must exist between the design of algorithms at different stages in the pipeline. Finally, we provide constructions that permit flexibility at later stages, meaning that there is no need to lock in the entire pipeline at the time that the early stage is constructed.

Cite as

Cynthia Dwork, Christina Ilvento, and Meena Jagadeesan. Individual Fairness in Pipelines. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2020.7,
  author =	{Dwork, Cynthia and Ilvento, Christina and Jagadeesan, Meena},
  title =	{{Individual Fairness in Pipelines}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.7},
  URN =		{urn:nbn:de:0030-drops-120235},
  doi =		{10.4230/LIPIcs.FORC.2020.7},
  annote =	{Keywords: algorithmic fairness, fairness under composition, pipelines}
}
Document
Abstracting Fairness: Oracles, Metrics, and Interpretability

Authors: Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of "true" fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every "truly fair" classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness. Our principal technical result is a higher fidelity extractor under a mild technical constraint on the weak oracle’s conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair. Our results have implications for interpretability - a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be "unfair" or illegitimately derived.

Cite as

Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur. Abstracting Fairness: Oracles, Metrics, and Interpretability. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2020.8,
  author =	{Dwork, Cynthia and Ilvento, Christina and Rothblum, Guy N. and Sur, Pragya},
  title =	{{Abstracting Fairness: Oracles, Metrics, and Interpretability}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.8},
  URN =		{urn:nbn:de:0030-drops-120247},
  doi =		{10.4230/LIPIcs.FORC.2020.8},
  annote =	{Keywords: Algorithmic fairness, fairness definitions, causality-based fairness, interpretability, individual fairness, metric fairness}
}
Document
Fairness Under Composition

Authors: Cynthia Dwork and Christina Ilvento

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.

Cite as

Cynthia Dwork and Christina Ilvento. Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.ITCS.2019.33,
  author =	{Dwork, Cynthia and Ilvento, Christina},
  title =	{{Fairness Under Composition}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.33},
  URN =		{urn:nbn:de:0030-drops-101269},
  doi =		{10.4230/LIPIcs.ITCS.2019.33},
  annote =	{Keywords: algorithmic fairness, fairness, fairness under composition}
}
Document
Parallel and Distributed Algorithms (Dagstuhl Seminar 9537)

Authors: Cynthia Dwork, Ernst W. Mayr, and Friedhelm Meyer auf der Heide

Published in: Dagstuhl Seminar Reports, Volume 1 (2021)


Cite as

Cynthia Dwork, Ernst W. Mayr, and Friedhelm Meyer auf der Heide. Parallel and Distributed Algorithms (Dagstuhl Seminar 9537). Dagstuhl Seminar Report 125, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (1995)


BibTeX

@TechReport{dwork_et_al:DagSemRep.125,
  author =	{Dwork, Cynthia and Mayr, Ernst W. and Meyer auf der Heide, Friedhelm},
  title =	{{Parallel and Distributed Algorithms (Dagstuhl Seminar 9537)}},
  pages =	{1--17},
  ISSN =	{1619-0203},
  year =	{1995},
  type = 	{Dagstuhl Seminar Report},
  number =	{125},
  institution =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemRep.125},
  URN =		{urn:nbn:de:0030-drops-150135},
  doi =		{10.4230/DagSemRep.125},
}
  • Refine by Author
  • 7 Dwork, Cynthia
  • 4 Ilvento, Christina
  • 2 Rothblum, Guy N.
  • 1 Arteche, Noel
  • 1 Baillot, Patrick

  • Refine by Classification
  • 4 Theory of computation → Machine learning theory
  • 4 Theory of computation → Theory and algorithms for application domains
  • 3 Theory of computation → Design and analysis of algorithms
  • 1 Mathematics of computing → Approximation algorithms
  • 1 Security and privacy → Logic and verification

  • Refine by Keyword
  • 3 algorithmic fairness
  • 2 differential privacy
  • 2 fairness under composition
  • 2 individual fairness
  • 1 Adaptive Data Analysis

  • Refine by Type
  • 13 document

  • Refine by Publication Year
  • 4 2024
  • 3 2020
  • 3 2023
  • 1 1995
  • 1 2019