13 Search Results for "Ligett, Katrina"


Volume

LIPIcs, Volume 192

2nd Symposium on Foundations of Responsible Computing (FORC 2021)

FORC 2021, June 9-11, 2021, Virtual Conference

Editors: Katrina Ligett and Swati Gupta

Document
Complete Volume
LIPIcs, Volume 192, FORC 2021, Complete Volume

Authors: Katrina Ligett and Swati Gupta

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
LIPIcs, Volume 192, FORC 2021, Complete Volume

Cite as

2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 1-152, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{ligett_et_al:LIPIcs.FORC.2021,
  title =	{{LIPIcs, Volume 192, FORC 2021, Complete Volume}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{1--152},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021},
  URN =		{urn:nbn:de:0030-drops-138675},
  doi =		{10.4230/LIPIcs.FORC.2021},
  annote =	{Keywords: LIPIcs, Volume 192, FORC 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Katrina Ligett and Swati Gupta

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ligett_et_al:LIPIcs.FORC.2021.0,
  author =	{Ligett, Katrina and Gupta, Swati},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{0:i--0:viii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.0},
  URN =		{urn:nbn:de:0030-drops-138680},
  doi =		{10.4230/LIPIcs.FORC.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Privately Answering Counting Queries with Generalized Gaussian Mechanisms

Authors: Arun Ganesh and Jiazheng Zhao

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We give the first closed-form privacy guarantees for the Generalized Gaussian mechanism (the mechanism that adds noise x to a vector with probability proportional to exp(-(||x||_p/σ)^p) for some σ, p), in the setting of answering k counting (i.e., sensitivity-1) queries about a database with (ε, δ)-differential privacy (in particular, with low 𝓁_∞-error). Just using Generalized Gaussian noise, we obtain a mechanism such that if the true answers to the queries are the vector d, the mechanism outputs answers d̃ with the 𝓁_∞-error guarantee: 𝔼[||d̃ - d||_∞] = O(√{k log log k log(1/δ)}/ε). This matches the error bound of [Steinke and Ullman, 2017], but using a much simpler mechanism. By composing this mechanism with the sparse vector mechanism (generalizing a technique of [Steinke and Ullman, 2017]), we obtain a mechanism improving the √{k log log k} dependence on k to √{k log log log k}. Our main technical contribution is showing that certain powers of Generalized Gaussians, which follow a Generalized Gamma distribution, are sub-gamma. In subsequent work, the optimal 𝓁_∞-error bound of O(√{k log (1/δ)}/ε) has been achieved by [Dagan and Kur, 2020] and [Ghazi et al., 2020] independently. However, the Generalized Gaussian mechanism has some qualitative advantages over the mechanisms used in these papers, which may make it of interest to practitioners and theoreticians alike, both in the setting of answering counting queries and more generally.
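
A minimal sketch of the noise step, assuming scipy is available: because ||x||_p^p = Σ_i |x_i|^p, the Generalized Gaussian density factorizes into k independent generalized-normal coordinates, which scipy.stats.gennorm samples directly (its pdf is proportional to exp(-|x/scale|^beta)). The calibration of σ to a target (ε, δ) comes from the paper's analysis and is not reproduced here; the σ below is a placeholder.

# Sketch: Generalized Gaussian mechanism for k counting queries.
import numpy as np
from scipy.stats import gennorm

def generalized_gaussian_mechanism(true_answers, sigma, p, rng=None):
    # sigma must be calibrated to the target (eps, delta) following the
    # paper's analysis; that calibration is not reproduced here.
    d = np.asarray(true_answers, dtype=float)
    noise = gennorm.rvs(p, scale=sigma, size=d.shape, random_state=rng)
    return d + noise

# Toy usage: k = 1000 sensitivity-1 queries, p = 4, placeholder sigma.
rng = np.random.default_rng(0)
d = rng.integers(0, 100, size=1000).astype(float)
d_tilde = generalized_gaussian_mechanism(d, sigma=50.0, p=4, rng=rng)
print("l_inf error:", np.max(np.abs(d_tilde - d)))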

Cite as

Arun Ganesh and Jiazheng Zhao. Privately Answering Counting Queries with Generalized Gaussian Mechanisms. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 1:1-1:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ganesh_et_al:LIPIcs.FORC.2021.1,
  author =	{Ganesh, Arun and Zhao, Jiazheng},
  title =	{{Privately Answering Counting Queries with Generalized Gaussian Mechanisms}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{1:1--1:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.1},
  URN =		{urn:nbn:de:0030-drops-138698},
  doi =		{10.4230/LIPIcs.FORC.2021.1},
  annote =	{Keywords: Differential privacy, counting queries, Generalized Gaussians}
}
Document
An Algorithmic Framework for Fairness Elicitation

Authors: Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders. We introduce a framework in which pairs of individuals can be identified as requiring (approximately) equal treatment under a learned model, or requiring ordered treatment such as "applicant Alice should be at least as likely to receive a loan as applicant Bob". We provide a provably convergent and oracle efficient algorithm for learning the most accurate model subject to the elicited fairness constraints, and prove generalization bounds for both accuracy and fairness. This algorithm can also combine the elicited constraints with traditional statistical fairness notions, thus "correcting" or modifying the latter by the former. We report preliminary findings of a behavioral study of our framework using human-subject fairness constraints elicited on the COMPAS criminal recidivism dataset.
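
The paper's oracle-efficient learning algorithm is not reproduced here; the following is only a hedged sketch of auditing a trained model against the two kinds of elicited constraints the abstract describes (the function and parameter names are hypothetical).

# Sketch: audit a model's scores against elicited pairwise constraints.
# equal_pairs (i, j): individuals i and j should receive approximately
# equal treatment. ordered_pairs (i, j): i should be treated at least
# as favorably as j (e.g., Alice at least as likely to get the loan).
import numpy as np

def fairness_violations(scores, equal_pairs, ordered_pairs, eps=0.05):
    s = np.asarray(scores, dtype=float)
    eq = sum(1 for i, j in equal_pairs if abs(s[i] - s[j]) > eps)
    order = sum(1 for i, j in ordered_pairs if s[i] < s[j] - eps)
    return eq, order

# Toy usage: scores for four individuals, two elicited constraints.
scores = [0.9, 0.4, 0.7, 0.2]
print(fairness_violations(scores, equal_pairs=[(0, 2)], ordered_pairs=[(2, 1)]))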

Cite as

Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. An Algorithmic Framework for Fairness Elicitation. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{jung_et_al:LIPIcs.FORC.2021.2,
  author =	{Jung, Christopher and Kearns, Michael and Neel, Seth and Roth, Aaron and Stapleton, Logan and Wu, Zhiwei Steven},
  title =	{{An Algorithmic Framework for Fairness Elicitation}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.2},
  URN =		{urn:nbn:de:0030-drops-138701},
  doi =		{10.4230/LIPIcs.FORC.2021.2},
  annote =	{Keywords: Fairness, Fairness Elicitation}
}
Document
On the Computational Tractability of a Geographic Clustering Problem Arising in Redistricting

Authors: Vincent Cohen-Addad, Philip N. Klein, Dániel Marx, Archer Wheeler, and Christopher Wolfram

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
Redistricting is the problem of dividing up a state into a given number k of regions (called districts) where the voters in each district are to elect a representative. The three primary criteria are: that each district be connected, that the populations of the districts be equal (or nearly equal), and that the districts be "compact". There are multiple competing definitions of compactness, usually minimizing some quantity. One measure that has recently been used is the number of cut edges. In this formulation of redistricting, one is given atomic regions out of which each district must be built (e.g., in the U.S., census blocks). The populations of the atomic regions are given. Consider the graph with one vertex per atomic region and an edge between atomic regions with a shared boundary of positive length. Define the weight of a vertex to be the population of the corresponding region. A districting plan is a partition of vertices into k pieces so that the parts have nearly equal weights and each part is connected. The districts are considered compact to the extent that the plan minimizes the number of edges crossing between different parts. There are two natural computational problems: find the most compact districting plan, and sample districting plans (possibly under a compactness constraint) uniformly at random. Both problems are NP-hard, so we consider restricting the input graph to have branchwidth at most w. (A planar graph’s branchwidth is bounded, for example, by its diameter.) If both k and w are bounded by constants, the problems are solvable in polynomial time. In this paper, we give lower and upper bounds that characterize the complexity of these problems in terms of the parameters k and w. For simplicity of notation, assume that each vertex has unit weight. We would ideally like algorithms whose running times are of the form O(f(k,w) n^c) for some constant c independent of k and w (in which case the problems are said to be fixed-parameter tractable with respect to those parameters). We show that, under standard complexity-theoretic assumptions, no such algorithms exist. However, the problems are fixed-parameter tractable with respect to each of these parameters individually: there exist algorithms with running times of the form O(f(k) n^{O(w)}) and O(f(w) n^{k+1}). The first result was previously known. The new one, however, is more relevant to the application to redistricting, at least for coarse instances. Indeed, we have implemented a version of the algorithm and have used it to find optimally compact solutions to all redistricting instances for France (except Paris, which operates under different rules) under various population-balance constraints. For these instances, the values of w are modest and the values of k are very small.
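
A small sketch of the graph formulation just described, assuming networkx is available: given a plan (a partition of the region graph), count cut edges and check connectivity and population balance. The paper's branchwidth-based optimization algorithm is far more involved and is not sketched here.

# Sketch: score a districting plan in the abstract's graph formulation.
# Vertices are atomic regions weighted by population; a plan assigns
# each vertex to one of k districts; compactness = number of cut edges.
import networkx as nx

def evaluate_plan(G, assignment, k, tolerance=0.05):
    cut_edges = sum(1 for u, v in G.edges if assignment[u] != assignment[v])
    total = sum(G.nodes[n]["population"] for n in G)
    ideal = total / k
    connected = balanced = True
    for d in range(k):
        part = [n for n in G if assignment[n] == d]
        if not part or not nx.is_connected(G.subgraph(part)):
            connected = False
        pop = sum(G.nodes[n]["population"] for n in part)
        if abs(pop - ideal) > tolerance * ideal:
            balanced = False
    return cut_edges, connected, balanced

# Toy usage: a 2x2 grid of unit-population regions, two districts.
G = nx.grid_2d_graph(2, 2)
nx.set_node_attributes(G, 1, "population")
plan = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}
print(evaluate_plan(G, plan, k=2))  # (2, True, True)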

Cite as

Vincent Cohen-Addad, Philip N. Klein, Dániel Marx, Archer Wheeler, and Christopher Wolfram. On the Computational Tractability of a Geographic Clustering Problem Arising in Redistricting. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cohenaddad_et_al:LIPIcs.FORC.2021.3,
  author =	{Cohen-Addad, Vincent and Klein, Philip N. and Marx, D\'{a}niel and Wheeler, Archer and Wolfram, Christopher},
  title =	{{On the Computational Tractability of a Geographic Clustering Problem Arising in Redistricting}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.3},
  URN =		{urn:nbn:de:0030-drops-138718},
  doi =		{10.4230/LIPIcs.FORC.2021.3},
  annote =	{Keywords: redistricting, algorithms, planar graphs, lower bounds}
}
Document
A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates Be Reconciled?

Authors: Claire Lazar Reich and Suhas Vijaykumar

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
Decision makers increasingly rely on algorithmic risk scores to determine access to binary treatments including bail, loans, and medical interventions. In these settings, we reconcile two fairness criteria that were previously shown to be in conflict: calibration and error rate equality. In particular, we derive necessary and sufficient conditions for the existence of calibrated scores that yield classifications achieving equal error rates at any given group-blind threshold. We then present an algorithm that searches for the most accurate score subject to both calibration and minimal error rate disparity. Applied to the COMPAS criminal risk assessment tool, we show that our method can eliminate error disparities while maintaining calibration. In a separate application to credit lending, we compare our procedure to the omission of sensitive features and show that it raises both profit and the probability that creditworthy individuals receive loans.
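
As a hedged illustration of the quantity being equalized, the sketch below computes per-group false positive and false negative rates under a single group-blind threshold; the names and toy data are hypothetical, and the paper's score-construction algorithm is not reproduced.

# Sketch: per-group error rates when one group-blind threshold is
# applied to a (calibrated) risk score.
import numpy as np

def group_error_rates(scores, labels, groups, threshold=0.5):
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    preds = scores >= threshold
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg, pos = m & (labels == 0), m & (labels == 1)
        fpr = preds[neg].mean() if neg.any() else float("nan")
        fnr = (~preds[pos]).mean() if pos.any() else float("nan")
        rates[g] = (fpr, fnr)  # equal error rates: these match across g
    return rates

# Toy usage with hypothetical scores, outcomes, and two groups.
print(group_error_rates([0.9, 0.2, 0.8, 0.3], [1, 0, 1, 0], ["a", "a", "b", "b"]))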

Cite as

Claire Lazar Reich and Suhas Vijaykumar. A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates Be Reconciled? In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 4:1-4:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lazarreich_et_al:LIPIcs.FORC.2021.4,
  author =	{Lazar Reich, Claire and Vijaykumar, Suhas},
  title =	{{A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates Be Reconciled?}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{4:1--4:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.4},
  URN =		{urn:nbn:de:0030-drops-138727},
  doi =		{10.4230/LIPIcs.FORC.2021.4},
  annote =	{Keywords: fair prediction, impossibility results, screening decisions, classification, calibration, equalized odds, optimal transport, risk scores}
}
Document
Census TopDown: The Impacts of Differential Privacy on Redistricting

Authors: Aloni Cohen, Moon Duchin, JN Matthews, and Bhushan Suwal

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
The 2020 Decennial Census will be released with a new disclosure avoidance system in place, putting differential privacy in the spotlight for a wide range of data users. We consider several key applications of Census data in redistricting, developing tools and demonstrations for practitioners who are concerned about the impacts of this new noising algorithm called TopDown. Based on a close look at reconstructed Texas data, we find reassuring evidence that TopDown will not threaten the ability to produce districts with tolerable population balance or to detect signals of racial polarization for Voting Rights Act enforcement.
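
The sketch below is illustrative only and is not the Census Bureau's TopDown implementation: it shows the general noise-then-postprocess shape the abstract refers to, adding two-sided geometric (discrete Laplace) noise at two geographic levels and crudely reconciling child counts with their noised parent total.

# Sketch (illustrative only, NOT the Census Bureau's TopDown): noise
# counts at two levels, then post-process children so they are
# nonnegative integers roughly consistent with their noised parent.
import numpy as np

def two_sided_geometric(eps, size, rng):
    # Discrete Laplace noise: difference of two geometric variables.
    p = 1 - np.exp(-eps)
    return rng.geometric(p, size) - rng.geometric(p, size)

def noise_hierarchy(block_counts, eps, rng):
    blocks = np.asarray(block_counts)
    noisy_parent = max(int(blocks.sum() + two_sided_geometric(eps, 1, rng)[0]), 0)
    noisy_blocks = np.maximum(blocks + two_sided_geometric(eps, blocks.size, rng), 0)
    # Crude consistency step: proportionally rescale, then round.
    if noisy_blocks.sum() > 0:
        noisy_blocks = np.round(noisy_blocks * noisy_parent / noisy_blocks.sum())
    return noisy_parent, noisy_blocks.astype(int)

rng = np.random.default_rng(1)
print(noise_hierarchy([120, 45, 0, 310], eps=0.5, rng=rng))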

Cite as

Aloni Cohen, Moon Duchin, JN Matthews, and Bhushan Suwal. Census TopDown: The Impacts of Differential Privacy on Redistricting. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.FORC.2021.5,
  author =	{Cohen, Aloni and Duchin, Moon and Matthews, JN and Suwal, Bhushan},
  title =	{{Census TopDown: The Impacts of Differential Privacy on Redistricting}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.5},
  URN =		{urn:nbn:de:0030-drops-138736},
  doi =		{10.4230/LIPIcs.FORC.2021.5},
  annote =	{Keywords: Census, TopDown, differential privacy, redistricting, Voting Rights Act}
}
Document
Lexicographically Fair Learning: Algorithms and Generalization

Authors: Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized, and amongst all of those solutions, the error of the group with the third highest error should be minimized, and so on. Despite its naturalness, correctly defining lexifairness is considerably more subtle than minimax fairness, because of inherent sensitivity to approximation error. We give a notion of approximate lexifairness that avoids this issue, and then derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting. When the underlying empirical risk minimization problem absent fairness constraints is convex (as it is, for example, with linear and logistic regression), our algorithms are provably efficient even in the worst case. Finally, we show generalization bounds: approximate lexifairness on the training sample implies approximate lexifairness on the true distribution with high probability. Our ability to prove generalization bounds depends on our choosing definitions that avoid the instability of naive definitions.
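
A tiny sketch of the ordering lexifairness refines: models are compared by their group-error vectors sorted in decreasing order, lexicographically, whereas minimax fairness compares only the first coordinate. This naive exact comparison is precisely the unstable notion the paper replaces with approximate lexifairness; the sketch is only meant to fix intuition.

# Sketch: the (naive, exact) ordering that lexifairness refines.
def lexifairness_key(group_errors):
    # Sort key: lexicographically smaller means fairer.
    return tuple(sorted(group_errors, reverse=True))

# Models A and B have the same worst-group error (a minimax tie),
# but B's second-worst group does better, so B is lexifairer.
model_a = [0.30, 0.25, 0.10]
model_b = [0.30, 0.15, 0.12]
print(min([model_a, model_b], key=lexifairness_key))  # model_b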

Cite as

Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi. Lexicographically Fair Learning: Algorithms and Generalization. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{diana_et_al:LIPIcs.FORC.2021.6,
  author =	{Diana, Emily and Gill, Wesley and Globus-Harris, Ira and Kearns, Michael and Roth, Aaron and Sharifi-Malvajerdi, Saeed},
  title =	{{Lexicographically Fair Learning: Algorithms and Generalization}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.6},
  URN =		{urn:nbn:de:0030-drops-138748},
  doi =		{10.4230/LIPIcs.FORC.2021.6},
  annote =	{Keywords: Fair Learning, Lexicographic Fairness, Online Learning, Game Theory}
}
Document
Causal Intersectionality and Fair Ranking

Authors: Ke Yang, Joshua R. Loftus, and Julia Stoyanovich

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
In this paper we propose a causal modeling approach to intersectional fairness, and a flexible, task-specific method for computing intersectionally fair rankings. Rankings are used in many contexts, ranging from Web search to college admissions, but causal inference for fair rankings has received limited attention. Additionally, the growing literature on causal fairness has directed little attention to intersectionality. By bringing these issues together in a formal causal framework we make the application of intersectionality in algorithmic fairness explicit, connected to important real world effects and domain knowledge, and transparent about technical limitations. We experimentally evaluate our approach on real and synthetic datasets, exploring its behavior under different structural assumptions.

Cite as

Ke Yang, Joshua R. Loftus, and Julia Stoyanovich. Causal Intersectionality and Fair Ranking. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{yang_et_al:LIPIcs.FORC.2021.7,
  author =	{Yang, Ke and Loftus, Joshua R. and Stoyanovich, Julia},
  title =	{{Causal Intersectionality and Fair Ranking}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.7},
  URN =		{urn:nbn:de:0030-drops-138756},
  doi =		{10.4230/LIPIcs.FORC.2021.7},
  annote =	{Keywords: fairness, intersectionality, ranking, causality}
}
Document
Bounded-Leakage Differential Privacy

Authors: Katrina Ligett, Charlotte Peale, and Omer Reingold

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We introduce and study a relaxation of differential privacy [Dwork et al., 2006] that accounts for mechanisms that leak some additional, bounded information about the database. We apply this notion to reason about two distinct settings where the notion of differential privacy is of limited use. First, we consider cases, such as in the 2020 US Census [Abowd, 2018], in which some information about the database is released exactly or with small noise. Second, we consider the accumulation of privacy harms for an individual across studies that may not even include the data of this individual. The tools that we develop for bounded-leakage differential privacy allow us to reason about privacy loss in these settings and to show that individuals retain some meaningful protections.
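
As a hedged illustration of the kind of composite release this relaxation is meant to cover (not the paper's construction), the sketch below pairs a standard ε-DP noisy count with a small piece of exact information about the database; the names are hypothetical.

# Sketch: a composite release of the sort bounded-leakage DP models:
# an eps-DP answer published together with exact, bounded leakage.
import numpy as np

def mechanism_with_bounded_leakage(db, eps, rng):
    # db: list of 0/1 records; the count has sensitivity 1.
    count = sum(db)
    dp_answer = count + rng.laplace(scale=1.0 / eps)  # standard eps-DP
    # Exact leakage: the count rounded down to the nearest 100. Plain
    # (eps, delta)-DP cannot account for this exact release; the
    # bounded-leakage relaxation is designed to.
    leakage = 100 * (count // 100)
    return dp_answer, leakage

rng = np.random.default_rng(2)
db = [1] * 230 + [0] * 170
print(mechanism_with_bounded_leakage(db, eps=0.5, rng=rng))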

Cite as

Katrina Ligett, Charlotte Peale, and Omer Reingold. Bounded-Leakage Differential Privacy. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ligett_et_al:LIPIcs.FORC.2020.10,
  author =	{Ligett, Katrina and Peale, Charlotte and Reingold, Omer},
  title =	{{Bounded-Leakage Differential Privacy}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.10},
  URN =		{urn:nbn:de:0030-drops-120265},
  doi =		{10.4230/LIPIcs.FORC.2020.10},
  annote =	{Keywords: differential privacy, applications, privacy, leakage, auxiliary information}
}
Document
A New Analysis of Differential Privacy’s Generalization Guarantees

Authors: Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We give a new proof of the "transfer theorem" underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work. An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.
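
A minimal sketch of the setting, with an illustrative (uncalibrated) noise scale: a mechanism answers adaptively chosen statistical queries, i.e., means of [0,1]-valued functions, on a sample by adding Gaussian noise. The transfer theorem says that because such a mechanism is differentially private and sample-accurate, its answers are also accurate for the underlying distribution.

# Sketch: Gaussian-noise answers to adaptively chosen statistical
# queries. The noise scale is illustrative, not the paper's bound.
import numpy as np

rng = np.random.default_rng(3)
sample = rng.normal(size=(5000, 2))  # n datapoints from the distribution

def answer(query, sigma=0.01):
    empirical = np.mean([query(x) for x in sample])
    return empirical + rng.normal(scale=sigma)  # noisy sample mean

# An adaptive analyst: the second query depends on the first answer,
# the regime where naive reuse of the sample can overfit.
a1 = answer(lambda x: float(x[0] > 0))
a2 = answer(lambda x: float(x[1] > 0) if a1 > 0.5 else float(x[1] < 0))
print(a1, a2)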

Cite as

Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. A New Analysis of Differential Privacy’s Generalization Guarantees. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 31:1-31:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{jung_et_al:LIPIcs.ITCS.2020.31,
  author =	{Jung, Christopher and Ligett, Katrina and Neel, Seth and Roth, Aaron and Sharifi-Malvajerdi, Saeed and Shenfeld, Moshe},
  title =	{{A New Analysis of Differential Privacy’s Generalization Guarantees}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{31:1--31:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.31},
  URN =		{urn:nbn:de:0030-drops-117165},
  doi =		{10.4230/LIPIcs.ITCS.2020.31},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Document
Differentially Private Combinatorial Optimization

Authors: Kunal Talwar, Anupam Gupta, Katrina Ligett, Frank McSherry, and Aaron Roth

Published in: Dagstuhl Seminar Proceedings, Volume 9511, Parameterized complexity and approximation algorithms (2010)


Abstract
Consider the following problem: given a metric space, some of whose points are "clients," select a set of at most k facility locations to minimize the average distance from the clients to their nearest facility. This is just the well-studied k-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. This raises the following quandary: what if the locations of the clients are sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients? In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution. Apart from the k-median problem, we consider the problems of vertex and set cover, min-cut, facility location, and Steiner tree, and give approximation algorithms and lower bounds for these problems. We also consider the recently introduced submodular maximization problem, "Combinatorial Public Projects" (CPP), shown by Papadimitriou et al. [PSS08] to be inapproximable to subpolynomial multiplicative factors by any efficient and truthful algorithm. We give a differentially private (and hence approximately truthful) algorithm that achieves a logarithmic additive approximation. Joint work with Anupam Gupta, Katrina Ligett, Frank McSherry and Aaron Roth.
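
A hedged sketch of the basic primitive behind several of these algorithms, the exponential mechanism, which selects a discrete solution with probability exponential in its utility. The paper's k-median algorithm applies such private selections iteratively with a careful sensitivity analysis; the toy instance and all names below are illustrative.

# Sketch: the exponential mechanism. It samples a candidate with
# probability proportional to exp(eps * u / (2 * Delta)), where Delta
# bounds how much one client can change any candidate's utility.
import numpy as np

def exponential_mechanism(candidates, utility, eps, sensitivity, rng):
    u = np.array([utility(c) for c in candidates], dtype=float)
    logits = eps * u / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy instance: clients and candidate facilities on the line [0, 1];
# utility = negative total client distance. Moving one client changes
# the total distance by at most the diameter (1), the sensitivity here.
clients = [0.1, 0.2, 0.8, 0.9]
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
utility = lambda f: -sum(abs(f - c) for c in clients)
rng = np.random.default_rng(4)
print(exponential_mechanism(candidates, utility, eps=1.0, sensitivity=1.0, rng=rng))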

Cite as

Kunal Talwar, Anupam Gupta, Katrina Ligett, Frank McSherry, and Aaron Roth. Differentially Private Combinatorial Optimization. In Parameterized complexity and approximation algorithms. Dagstuhl Seminar Proceedings, Volume 9511, pp. 1-31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{talwar_et_al:DagSemProc.09511.6,
  author =	{Talwar, Kunal and Gupta, Anupam and Ligett, Katrina and McSherry, Frank and Roth, Aaron},
  title =	{{Differentially Private Combinatorial Optimization}},
  booktitle =	{Parameterized complexity and approximation algorithms},
  pages =	{1--31},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9511},
  editor =	{Demaine, Erik D. and Hajiaghayi, MohammadTaghi and Marx, D\'{a}niel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09511.6},
  URN =		{urn:nbn:de:0030-drops-24986},
  doi =		{10.4230/DagSemProc.09511.6},
  annote =	{Keywords: Differential Privacy, Approximation Algorithms}
}
  • Refine by Author
  • 5 Ligett, Katrina
  • 4 Roth, Aaron
  • 2 Gupta, Swati
  • 2 Jung, Christopher
  • 2 Kearns, Michael

  • Refine by Classification
  • 3 Applied computing → Law
  • 3 Applied computing → Voting / election technologies
  • 3 Computing methodologies → Machine learning
  • 3 Computing methodologies → Ranking
  • 3 Computing methodologies → Supervised learning

  • Refine by Keyword
  • 2 Differential Privacy
  • 2 differential privacy
  • 2 redistricting
  • 1 Adaptive Data Analysis
  • 1 Approximation Algorithms

  • Refine by Type
  • 12 document
  • 1 volume

  • Refine by Publication Year
  • 10 2021
  • 2 2020
  • 1 2010
