Search Results

Documents authored by Roth, Aaron


Document
Balanced Filtering via Disclosure-Controlled Proxies

Authors: Siqi Deng, Emily Diana, Michael Kearns, and Aaron Roth

Published in: LIPIcs, Volume 295, 5th Symposium on Foundations of Responsible Computing (FORC 2024)


Abstract
We study the problem of collecting a cohort or set that is balanced with respect to sensitive groups when group membership is unavailable or prohibited from use at deployment time. Specifically, our deployment-time collection mechanism does not reveal significantly more about the group membership of any individual sample than can be ascertained from base rates alone. To do this, we study a learner that can use a small set of labeled data to train a proxy function that can later be used for this filtering or selection task. We then associate the range of the proxy function with sampling probabilities; given a new example, we classify it using our proxy function and then select it with probability corresponding to its proxy classification. Importantly, we require that the proxy classification does not reveal significantly more information about the sensitive group membership of any individual example compared to population base rates alone (i.e., the level of disclosure should be controlled) and show that we can find such a proxy in a sample- and oracle-efficient manner. Finally, we experimentally evaluate our algorithm and analyze its generalization properties.
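
To make the deployment-time selection step concrete, here is a minimal Python sketch (our illustration, not code from the paper; the proxy interface and the per-class sampling probabilities are hypothetical names): a trained proxy assigns each arriving example a class, and the example is kept with the probability associated with that class.

import random

def filter_stream(examples, proxy, sampling_prob):
    """Select a cohort from a stream of unlabeled examples.

    proxy: a trained classifier mapping an example to a proxy class
           (hypothetical interface; the paper trains it on a small
           labeled sample under a disclosure-control constraint).
    sampling_prob: dict mapping each proxy class to a selection
                   probability, chosen so the selected cohort is
                   balanced across sensitive groups in expectation.
    """
    cohort = []
    for x in examples:
        c = proxy(x)                      # proxy classification of x
        if random.random() < sampling_prob[c]:
            cohort.append(x)              # keep x with prob. sampling_prob[c]
    return cohort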

Cite as

Siqi Deng, Emily Diana, Michael Kearns, and Aaron Roth. Balanced Filtering via Disclosure-Controlled Proxies. In 5th Symposium on Foundations of Responsible Computing (FORC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 295, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX


@InProceedings{deng_et_al:LIPIcs.FORC.2024.4,
  author =	{Deng, Siqi and Diana, Emily and Kearns, Michael and Roth, Aaron},
  title =	{{Balanced Filtering via Disclosure-Controlled Proxies}},
  booktitle =	{5th Symposium on Foundations of Responsible Computing (FORC 2024)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-319-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{295},
  editor =	{Rothblum, Guy N.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2024.4},
  URN =		{urn:nbn:de:0030-drops-200872},
  doi =		{10.4230/LIPIcs.FORC.2024.4},
  annote =	{Keywords: Algorithms, Sampling, Ethical/Societal Implications}
}
Document
Online Multivalid Learning: Means, Moments, and Prediction Intervals

Authors: Varun Gupta, Christopher Jung, Georgy Noarov, Mallesh M. Pai, and Aaron Roth

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples (x,y). This means that the resulting estimates correctly predict various statistics of the labels y not just marginally - as averaged over the sequence of examples - but also conditionally on x ∈ G for any G belonging to an arbitrary intersecting collection of groups 𝒢. We provide three instantiations of this framework. The first is mean prediction, which corresponds to an online algorithm satisfying the notion of multicalibration from [Hébert-Johnson et al., 2018]. The second is variance and higher moment prediction, which corresponds to an online algorithm satisfying the notion of mean-conditioned moment multicalibration from [Jung et al., 2021]. Finally, we define a new notion of prediction interval multivalidity, and give an algorithm for finding prediction intervals which satisfy it. Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods, giving rise to very general techniques for quantifying the uncertainty of predictions of black box algorithms, even in an online adversarial setting. When instantiated for prediction intervals, this solves a problem similar to conformal prediction, but in an adversarial environment and with multivalidity guarantees stronger than simple marginal coverage guarantees.
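
To see what multivalidity adds over a marginal guarantee, the following sketch (our illustration, not code from the paper) audits prediction intervals after the fact: marginal coverage can look fine while coverage on some group G ∈ 𝒢 is poor, which is exactly what multivalidity rules out.

def group_coverage(intervals, labels, groups):
    """Empirical coverage of prediction intervals, per group.

    intervals: list of (lo, hi) predicted intervals, one per example.
    labels:    list of realized labels y.
    groups:    dict mapping a group name to the list of example
               indices in that group (groups may intersect).
    Returns a dict: group name -> fraction of that group's examples
    whose label landed inside the predicted interval.
    """
    coverage = {}
    for g, idx in groups.items():
        hits = sum(1 for i in idx
                   if intervals[i][0] <= labels[i] <= intervals[i][1])
        coverage[g] = hits / len(idx)
    return coverage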

Cite as

Varun Gupta, Christopher Jung, Georgy Noarov, Mallesh M. Pai, and Aaron Roth. Online Multivalid Learning: Means, Moments, and Prediction Intervals. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 82:1-82:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.ITCS.2022.82,
  author =	{Gupta, Varun and Jung, Christopher and Noarov, Georgy and Pai, Mallesh M. and Roth, Aaron},
  title =	{{Online Multivalid Learning: Means, Moments, and Prediction Intervals}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{82:1--82:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.82},
  URN =		{urn:nbn:de:0030-drops-156785},
  doi =		{10.4230/LIPIcs.ITCS.2022.82},
  annote =	{Keywords: Uncertainty Estimation, Calibration, Online Learning}
}
Document
Invited Talk
A User Friendly Power Tool for Deriving Online Learning Algorithms (Invited Talk)

Authors: Aaron Roth

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In this talk, we overview a simple and user-friendly framework developed in [Noarov et al., 2021] that can be used to derive online learning algorithms in a number of settings. In the core framework, at every round, an adaptive adversary introduces a new game, consisting of an action space for the learner, an action space for the adversary, and a vector-valued objective function that is concave-convex in every coordinate. The learner and the adversary then play in this game. The learner’s goal is to play so as to minimize the maximum coordinate of the cumulative vector-valued loss. The resulting one-shot game is not concave-convex, and so the minimax theorem does not apply. Nevertheless, we give a simple algorithm that can compete with the setting in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our simple framework by using it to derive optimal bounds and algorithms across a variety of domains. This includes no-regret learning: we can recover optimal algorithms and bounds for minimizing external regret, internal regret, adaptive regret, multigroup regret, subsequence regret, and permutation regret in the sleeping experts setting. It also includes (multi)calibration [Hébert-Johnson et al., 2018] and related notions: we are able to recover recently derived algorithms and bounds for online adversarial multicalibration [Gupta et al., 2021], mean conditioned moment multicalibration [Jung et al., 2021], and prediction interval multivalidity [Gupta et al., 2021]. Finally, we use it to derive a new variant of Blackwell’s Approachability Theorem, which we term "Fast Polytope Approachability".
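
As a rough picture of the core loop (a sketch under our own assumptions, not the algorithm of [Noarov et al., 2021]): the learner tracks the cumulative vector-valued loss and puts exponential weights on its coordinates, so play concentrates on driving down whichever coordinates are currently largest. The game interface below is hypothetical.

import math

def play_rounds(games, eta):
    """Illustrative loop for minimizing the max coordinate of a
    cumulative vector-valued loss.

    games: iterable of per-round game objects; we assume each exposes
        dim, best_response(weights) -> (learner_action, adversary_action),
        and loss(a, b) -> list of dim per-coordinate losses.
        This interface is hypothetical.
    eta: learning rate for the exponential weights over coordinates.
    """
    cumulative = None
    for game in games:
        if cumulative is None:
            cumulative = [0.0] * game.dim
        # Exponential weights over coordinates: weight coordinate j
        # proportionally to exp(eta * cumulative[j]).
        z = sum(math.exp(eta * c) for c in cumulative)
        weights = [math.exp(eta * c) / z for c in cumulative]
        a, b = game.best_response(weights)   # play the weighted scalar game
        loss = game.loss(a, b)
        cumulative = [c + l for c, l in zip(cumulative, loss)]
    return cumulative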

Cite as

Aaron Roth. A User Friendly Power Tool for Deriving Online Learning Algorithms (Invited Talk). In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{roth:LIPIcs.ESA.2021.2,
  author =	{Roth, Aaron},
  title =	{{A User Friendly Power Tool for Deriving Online Learning Algorithms}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.2},
  URN =		{urn:nbn:de:0030-drops-145835},
  doi =		{10.4230/LIPIcs.ESA.2021.2},
  annote =	{Keywords: Online Learning, Multicalibration, Multivalidity, Blackwell Approachability}
}
Document
An Algorithmic Framework for Fairness Elicitation

Authors: Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders. We introduce a framework in which pairs of individuals can be identified as requiring (approximately) equal treatment under a learned model, or requiring ordered treatment such as "applicant Alice should be at least as likely to receive a loan as applicant Bob". We provide a provably convergent and oracle-efficient algorithm for learning the most accurate model subject to the elicited fairness constraints, and prove generalization bounds for both accuracy and fairness. This algorithm can also combine the elicited constraints with traditional statistical fairness notions, thus "correcting" or modifying the latter by the former. We report preliminary findings of a behavioral study of our framework using human-subject fairness constraints elicited on the COMPAS criminal recidivism dataset.
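
One way to picture the elicited constraints (our illustration; the representation below is hypothetical, not the paper's): each constraint either requires two individuals to receive approximately equal treatment or orders their treatment, and a learned model can be audited against the elicited set.

def violations(model, constraints, slack):
    """Count elicited fairness constraints a model violates.

    model: maps an individual to a score, e.g. a loan-approval
           probability (hypothetical interface).
    constraints: list of (kind, a, b) triples, where kind is
        "equal"   -> |model(a) - model(b)| should be at most slack, or
        "ordered" -> model(a) should be at least model(b) - slack
                     ("a should be at least as likely as b").
    """
    bad = 0
    for kind, a, b in constraints:
        if kind == "equal" and abs(model(a) - model(b)) > slack:
            bad += 1
        elif kind == "ordered" and model(a) < model(b) - slack:
            bad += 1
    return bad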

Cite as

Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. An Algorithmic Framework for Fairness Elicitation. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{jung_et_al:LIPIcs.FORC.2021.2,
  author =	{Jung, Christopher and Kearns, Michael and Neel, Seth and Roth, Aaron and Stapleton, Logan and Wu, Zhiwei Steven},
  title =	{{An Algorithmic Framework for Fairness Elicitation}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.2},
  URN =		{urn:nbn:de:0030-drops-138701},
  doi =		{10.4230/LIPIcs.FORC.2021.2},
  annote =	{Keywords: Fairness, Fairness Elicitation}
}
Document
Lexicographically Fair Learning: Algorithms and Generalization

Authors: Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized, and amongst all of those solutions, the error of the group with the third highest error should be minimized, and so on. Despite its naturalness, correctly defining lexifairness is considerably more subtle than minimax fairness, because of inherent sensitivity to approximation error. We give a notion of approximate lexifairness that avoids this issue, and then derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting. When the underlying empirical risk minimization problem absent fairness constraints is convex (as it is, for example, with linear and logistic regression), our algorithms are provably efficient even in the worst case. Finally, we show generalization bounds - approximate lexifairness on the training sample implies approximate lexifairness on the true distribution with high probability. Our ability to prove generalization bounds depends on our choosing definitions that avoid the instability of naive definitions.
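
The lexicographic comparison itself is easy to state in code (a sketch of the exact, non-robust definition, not of the paper's oracle-efficient algorithm or its approximation-tolerant variant): sort each model's group errors in decreasing order and compare the profiles lexicographically, so a model is preferred if it has a smaller worst group error, with ties broken by the second worst, and so on.

def lex_key(group_errors):
    """Error profile sorted in decreasing order: compare the worst
    group error first, then the second worst, and so on."""
    return sorted(group_errors, reverse=True)

def lex_better(errors_a, errors_b):
    """True if model A's group error profile is lexicographically
    better (smaller) than model B's."""
    return lex_key(errors_a) < lex_key(errors_b)

# Example: same worst-case error (0.30), but the first profile has a
# smaller second-worst error (0.20 vs 0.25), so it is preferred.
assert lex_better([0.30, 0.10, 0.20], [0.30, 0.10, 0.25])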

Cite as

Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi. Lexicographically Fair Learning: Algorithms and Generalization. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{diana_et_al:LIPIcs.FORC.2021.6,
  author =	{Diana, Emily and Gill, Wesley and Globus-Harris, Ira and Kearns, Michael and Roth, Aaron and Sharifi-Malvajerdi, Saeed},
  title =	{{Lexicographically Fair Learning: Algorithms and Generalization}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.6},
  URN =		{urn:nbn:de:0030-drops-138748},
  doi =		{10.4230/LIPIcs.FORC.2021.6},
  annote =	{Keywords: Fair Learning, Lexicographic Fairness, Online Learning, Game Theory}
}
Document
Pipeline Interventions

Authors: Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We introduce the pipeline intervention problem, defined by a layered directed acyclic graph and a set of stochastic matrices governing transitions between successive layers. The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward. In our model, individuals are born into an initial position (i.e. some node in the first layer of the graph) according to a fixed probability distribution, and then stochastically progress through the graph according to the transition matrices, until they reach a node in the final layer of the graph; each node in the final layer has a reward associated with it. The pipeline intervention problem asks how to best make costly changes to the transition matrices governing people’s stochastic transitions through the graph, subject to a budget constraint. We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the least expected value. We consider two variants of the maximin objective that turn out to be distinct, depending on whether we demand a deterministic solution or allow randomization. For each objective, we give an efficient approximation algorithm (an additive FPTAS) for constant-width networks. We also tightly characterize the "price of fairness" in our setting: the ratio between the highest achievable social welfare and the social welfare consistent with a maximin optimal solution. Finally, we show that for polynomial-width networks, even approximating the maximin objective to any constant factor is NP-hard, even for networks with constant depth. This shows that the restriction on the width in our positive results is essential.
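
To fix the model in mind, expected values are straightforward to compute for a given pipeline (a minimal NumPy sketch in our own notation, not the paper's intervention algorithm): multiply the layer transition matrices together and dot the result with the final-layer rewards, giving the expected reward for each starting node. The maximin objective then maximizes the smallest entry of this vector over budgeted changes to the matrices.

import numpy as np

def expected_values(transitions, rewards):
    """Expected final reward for each starting node of a layered DAG.

    transitions: list of row-stochastic matrices; transitions[t][i, j]
                 is the probability of moving from node i in layer t
                 to node j in layer t+1.
    rewards:     vector of rewards on the final layer's nodes.
    """
    dist = np.eye(transitions[0].shape[0])  # row i = point mass on start node i
    for M in transitions:
        dist = dist @ M                     # push distributions one layer forward
    return dist @ rewards                   # expected reward per starting node

# Two start nodes, one intermediate layer, rewards on the final layer.
T1 = np.array([[0.8, 0.2], [0.3, 0.7]])
T2 = np.array([[0.5, 0.5], [0.1, 0.9]])
print(expected_values([T1, T2], np.array([1.0, 0.0])))  # [0.42, 0.22]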

Cite as

Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani. Pipeline Interventions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arunachaleswaran_et_al:LIPIcs.ITCS.2021.8,
  author =	{Arunachaleswaran, Eshwar Ram and Kannan, Sampath and Roth, Aaron and Ziani, Juba},
  title =	{{Pipeline Interventions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-135478},
  doi =		{10.4230/LIPIcs.ITCS.2021.8},
  annote =	{Keywords: Interventions for fairness, fairness in navigating life paths, social welfare, maximin welfare, budget-constrained optimization, hardness of approximation}
}
Document
Complete Volume
LIPIcs, Volume 156, FORC 2020, Complete Volume

Authors: Aaron Roth

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
LIPIcs, Volume 156, FORC 2020, Complete Volume

Cite as

1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 1-190, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{roth:LIPIcs.FORC.2020,
  title =	{{LIPIcs, Volume 156, FORC 2020, Complete Volume}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{1--190},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020},
  URN =		{urn:nbn:de:0030-drops-120157},
  doi =		{10.4230/LIPIcs.FORC.2020},
  annote =	{Keywords: LIPIcs, Volume 156, FORC 2020, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Aaron Roth

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{roth:LIPIcs.FORC.2020.0,
  author =	{Roth, Aaron},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{0:i--0:viii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.0},
  URN =		{urn:nbn:de:0030-drops-120168},
  doi =		{10.4230/LIPIcs.FORC.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
A New Analysis of Differential Privacy’s Generalization Guarantees

Authors: Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We give a new proof of the "transfer theorem" underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work. An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.
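
The summing step described in the abstract is a triangle inequality; schematically (our rendering, writing P for the data distribution, Q for the conditional distribution on datasets induced by the transcript, and a_q for the mechanism's answer to query q):

\[
\underbrace{\left| a_q - \mathbb{E}_{P}[q] \right|}_{\text{out-of-sample error}}
\;\le\;
\underbrace{\left| a_q - \mathbb{E}_{Q}[q] \right|}_{\text{sample accuracy (claim 2)}}
\;+\;
\underbrace{\left| \mathbb{E}_{Q}[q] - \mathbb{E}_{P}[q] \right|}_{\text{differential privacy (claim 1)}}
\]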

Cite as

Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. A New Analysis of Differential Privacy’s Generalization Guarantees. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 31:1-31:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{jung_et_al:LIPIcs.ITCS.2020.31,
  author =	{Jung, Christopher and Ligett, Katrina and Neel, Seth and Roth, Aaron and Sharifi-Malvajerdi, Saeed and Shenfeld, Moshe},
  title =	{{A New Analysis of Differential Privacy’s Generalization Guarantees}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{31:1--31:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.31},
  URN =		{urn:nbn:de:0030-drops-117165},
  doi =		{10.4230/LIPIcs.ITCS.2020.31},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Document
Differentially Private Combinatorial Optimization

Authors: Kunal Talwar, Anupam Gupta, Katrina Ligett, Frank McSherry, and Aaron Roth

Published in: Dagstuhl Seminar Proceedings, Volume 9511, Parameterized complexity and approximation algorithms (2010)


Abstract
Consider the following problem: given a metric space, some of whose points are "clients," select a set of at most k facility locations to minimize the average distance from the clients to their nearest facility. This is just the well-studied k-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. This raises the following quandary: what if the locations of the clients are sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients? In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution. Apart from the k-median problem, we consider the problems of vertex and set cover, min-cut, facility location, and Steiner tree, and give approximation algorithms and lower bounds for these problems. We also consider the recently introduced submodular maximization problem, "Combinatorial Public Projects" (CPP), shown by Papadimitriou et al. [PSS08] to be inapproximable to subpolynomial multiplicative factors by any efficient and truthful algorithm. We give a differentially private (and hence approximately truthful) algorithm that achieves a logarithmic additive approximation. Joint work with Anupam Gupta, Katrina Ligett, Frank McSherry and Aaron Roth.
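
As a taste of how private approximation algorithms in this vein are built, here is a sketch of the exponential mechanism of McSherry and Talwar, a standard primitive in this literature (our illustration; the abstract does not name the paper's specific subroutines): candidates are sampled with probability exponential in their utility, scaled by the privacy parameter epsilon and the utility's sensitivity, so that high-utility choices are likely but no single client's data changes the distribution by much.

import math, random

def exponential_mechanism(candidates, utility, epsilon, sensitivity):
    """Pick a candidate with probability proportional to
    exp(epsilon * utility(c) / (2 * sensitivity)).

    utility: data-dependent score, e.g. the (negated) cost of opening
             a facility at that location (hypothetical example);
             sensitivity bounds how much one client can change it.
    """
    scores = [epsilon * utility(c) / (2.0 * sensitivity) for c in candidates]
    m = max(scores)                         # stabilize the softmax
    weights = [math.exp(s - m) for s in scores]
    r = random.uniform(0.0, sum(weights))
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]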

Cite as

Kunal Talwar, Anupam Gupta, Katrina Ligett, Frank McSherry, and Aaron Roth. Differentially Private Combinatorial Optimization. In Parameterized complexity and approximation algorithms. Dagstuhl Seminar Proceedings, Volume 9511, pp. 1-31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{talwar_et_al:DagSemProc.09511.6,
  author =	{Talwar, Kunal and Gupta, Anupam and Ligett, Katrina and McSherry, Frank and Roth, Aaron},
  title =	{{Differentially Private Combinatorial Optimization}},
  booktitle =	{Parameterized complexity and approximation algorithms},
  pages =	{1--31},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9511},
  editor =	{Erik D. Demaine and MohammadTaghi Hajiaghayi and D\'{a}niel Marx},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09511.6},
  URN =		{urn:nbn:de:0030-drops-24986},
  doi =		{10.4230/DagSemProc.09511.6},
  annote =	{Keywords: Differential Privacy, Approximation Algorithms}
}