2 Search Results for "Sur, Pragya"


Document
Improved Generalization Guarantees in Restricted Data Models

Authors: Elbert Du and Cynthia Dwork

Published in: LIPIcs, Volume 218, 3rd Symposium on Foundations of Responsible Computing (FORC 2022)


Abstract
Differential privacy is known to protect against threats to validity incurred by adaptive, or exploratory, data analysis, even when the analyst adversarially searches for a statistical estimate that diverges from the true value of the quantity of interest on the underlying population. The cost of this protection is the accuracy loss incurred by differential privacy. In this work, inspired by standard models in the genomics literature, we consider data models in which individuals are represented by a sequence of attributes with the property that distant attributes are only weakly correlated. We show that, under this assumption, it is possible to "re-use" privacy budget on different portions of the data, significantly improving accuracy without increasing the risk of overfitting.
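
As a rough illustration of the budget re-use idea, consider answering one bounded-mean query per attribute block with the Laplace mechanism. The following is a minimal sketch, not the paper's construction: the contiguous block split, the per-block queries, and the toy data are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def laplace_mean(column, epsilon, lo=0.0, hi=1.0):
    # epsilon-DP release of the mean of n values in [lo, hi]; the
    # sensitivity of such a mean is (hi - lo) / n.
    n = len(column)
    sensitivity = (hi - lo) / n
    return column.mean() + rng.laplace(scale=sensitivity / epsilon)

# Toy data: n individuals, each a sequence of d attributes in [0, 1],
# split into contiguous blocks standing in for weakly correlated regions.
n, d, block_size = 1000, 20, 5
data = rng.random((n, d))
blocks = [data[:, i:i + block_size] for i in range(0, d, block_size)]

epsilon = 0.5  # per-query privacy budget
# One query per block: the mean, over individuals, of the block average.
answers = [laplace_mean(block.mean(axis=1), epsilon) for block in blocks]

# Standard composition would charge len(blocks) * epsilon in total; the
# paper's result is that, when distant blocks are only weakly correlated,
# the budget can be "re-used" across blocks, so accuracy improves without
# increasing the risk of overfitting.
print(answers)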

Cite as

Elbert Du and Cynthia Dwork. Improved Generalization Guarantees in Restricted Data Models. In 3rd Symposium on Foundations of Responsible Computing (FORC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 218, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{du_et_al:LIPIcs.FORC.2022.6,
  author =	{Du, Elbert and Dwork, Cynthia},
  title =	{{Improved Generalization Guarantees in Restricted Data Models}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2022.6},
  URN =		{urn:nbn:de:0030-drops-165299},
  doi =		{10.4230/LIPIcs.FORC.2022.6},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Document
Abstracting Fairness: Oracles, Metrics, and Interpretability

Authors: Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of "true" fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every "truly fair" classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness. Our principal technical result is a higher fidelity extractor under a mild technical constraint on the weak oracle's conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair. Our results have implications for interpretability, a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be "unfair" or illegitimately derived.
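
To make the induced coarse metric and the oracle interaction concrete, here is a minimal Python sketch; the weak_oracle, its acceptance rule, and the two candidate classifiers are hypothetical stand-ins for the paper's abstract oracle, not its extraction procedure.

from itertools import product

def coarse_metric(classifier, x, y):
    # The coarse metric a classifier induces: distance 0 between points
    # receiving the same decision, distance 1 otherwise.
    return 0 if classifier(x) == classifier(y) else 1

def weak_oracle(context, classifier):
    # Hypothetical stand-in for the "underlying fairness truth": accept a
    # (context, classifier) pair iff the classifier ignores the sensitive
    # bit x[0] for every setting of the remaining feature x[1].
    return all(classifier((a, f)) == classifier((b, f))
               for a, b, f in product([0, 1], repeat=3))

# Querying the oracle on candidate classifiers sketches the extraction loop.
candidates = {
    "uses_sensitive":    lambda x: x[0],
    "ignores_sensitive": lambda x: x[1],
}
for name, clf in candidates.items():
    verdict = "accepted" if weak_oracle("loan applications", clf) else "rejected"
    print(name, verdict)

# An accepted classifier induces the coarse metric described above.
clf = candidates["ignores_sensitive"]
print(coarse_metric(clf, (0, 1), (1, 1)))  # 0: same decision
print(coarse_metric(clf, (0, 0), (0, 1)))  # 1: different decisions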

Cite as

Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, and Pragya Sur. Abstracting Fairness: Oracles, Metrics, and Interpretability. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{dwork_et_al:LIPIcs.FORC.2020.8,
  author =	{Dwork, Cynthia and Ilvento, Christina and Rothblum, Guy N. and Sur, Pragya},
  title =	{{Abstracting Fairness: Oracles, Metrics, and Interpretability}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.8},
  URN =		{urn:nbn:de:0030-drops-120247},
  doi =		{10.4230/LIPIcs.FORC.2020.8},
  annote =	{Keywords: Algorithmic fairness, fairness definitions, causality-based fairness, interpretability, individual fairness, metric fairness}
}
  • Refine by Author
  • 2 Dwork, Cynthia
  • 1 Du, Elbert
  • 1 Ilvento, Christina
  • 1 Rothblum, Guy N.
  • 1 Sur, Pragya

  • Refine by Classification
  • 2 Theory of computation → Machine learning theory
  • 1 Theory of computation → Design and analysis of algorithms
  • 1 Theory of computation → Streaming, sublinear and near linear time algorithms

  • Refine by Keyword
  • 1 Adaptive Data Analysis
  • 1 Algorithmic fairness
  • 1 Differential Privacy
  • 1 Transfer Theorem
  • 1 causality-based fairness

  • Refine by Type
  • 2 document

  • Refine by Publication Year
  • 1 2020
  • 1 2022
