Search Results

Documents authored by Korolova, Aleksandra


Document
Differential Privacy Under Multiple Selections

Authors: Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
We consider the setting where a user with sensitive features wishes to obtain a recommendation from a server in a differentially private fashion. We propose a "multi-selection" architecture in which the server sends back multiple recommendations and the user chooses the one that best matches their private features. When the user's feature is one-dimensional (a point on an infinite line) and the accuracy measure is defined w.r.t. some increasing function 𝔥(.) of the distance on the line, we precisely characterize the optimal mechanism that satisfies differential privacy. The specification of the optimal mechanism includes both the distribution of the noise that the user adds to their private value and the algorithm the server uses to determine the set of results to send back as a response. We show that Laplace is an optimal noise distribution in this setting. Furthermore, we show that when 𝔥(.) is the identity function, the error of this optimal mechanism is inversely proportional to the number of results returned.
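
A minimal sketch of this multi-selection loop, assuming the identity cost 𝔥(d) = d. The noise scale, the evenly spaced candidate grid, and all function names are illustrative stand-ins rather than the paper's exact optimal construction.

import numpy as np

rng = np.random.default_rng(0)

def user_report(x, epsilon):
    # The user perturbs their private 1-D feature with Laplace noise,
    # which the abstract identifies as an optimal noise distribution here.
    return x + rng.laplace(scale=1.0 / epsilon)

def server_respond(y, k, epsilon):
    # The server answers with k candidates around the noisy report. An even
    # grid covering +/- 2/epsilon is a heuristic stand-in for the paper's
    # optimal response rule; its spacing, and hence the typical error,
    # shrinks roughly like 1/k.
    return y + np.linspace(-2.0 / epsilon, 2.0 / epsilon, k)

def user_select(x, candidates):
    # The user locally picks the candidate closest to their true feature;
    # this selection is never reported back to the server.
    return candidates[np.argmin(np.abs(candidates - x))]

x = 3.7  # private feature on the real line
candidates = server_respond(user_report(x, epsilon=1.0), k=8, epsilon=1.0)
print(f"error = {abs(user_select(x, candidates) - x):.3f}")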

Cite as

Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar. Differential Privacy Under Multiple Selections. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 8:1-8:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{goel_et_al:LIPIcs.FORC.2025.8,
  author =	{Goel, Ashish and Jiang, Zhihao and Korolova, Aleksandra and Munagala, Kamesh and Sarmasarkar, Sahasrajit},
  title =	{{Differential Privacy Under Multiple Selections}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{8:1--8:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.8},
  URN =		{urn:nbn:de:0030-drops-231353},
  doi =		{10.4230/LIPIcs.FORC.2025.8},
  annote =	{Keywords: Differential Privacy, Mechanism Design and Multi-Selection}
}
Document
The Power of Synergy in Differential Privacy: Combining a Small Curator with Local Randomizers

Authors: Amos Beimel, Aleksandra Korolova, Kobbi Nissim, Or Sheffet, and Uri Stemmer

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
Motivated by the desire to bridge the utility gap between the local and trusted-curator models of differential privacy for practical applications, we initiate the theoretical study of a hybrid model introduced by "Blender" [Avent et al., USENIX Security '17], in which differentially private protocols of n agents working in the local model are assisted by a differentially private curator that has access to the data of m additional users. We focus on the regime where m ≪ n and study the new capabilities of this (m,n)-hybrid model. We show that, although the hybrid model adds no significant new capabilities for the basic task of simple hypothesis testing, there are many other tasks (under a wide range of parameters) that can be solved in the hybrid model yet cannot be solved by either the curator or the local users separately. Moreover, we exhibit additional tasks where at least one round of interaction between the curator and the local users is necessary: no hybrid-model protocol without such interaction can solve these tasks. Taken together, our results show that combining the local model with a small curator can become part of a promising toolkit for designing and implementing differential privacy.
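
A minimal sketch of the (m,n)-hybrid model for one concrete task, estimating the fraction of users holding a sensitive bit. The equal-weight combination and all names are illustrative; the paper studies which tasks the hybrid model enables, not this particular estimator.

import numpy as np

rng = np.random.default_rng(1)

def curator_estimate(bits, epsilon):
    # Trusted curator over m users: Laplace mechanism on the sample mean
    # (sensitivity 1/m for a one-bit change).
    return bits.mean() + rng.laplace(scale=1.0 / (epsilon * len(bits)))

def local_estimate(bits, epsilon):
    # Local model over n users: epsilon-DP randomized response, debiased.
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)  # Pr[report the true bit]
    reports = np.where(rng.random(len(bits)) < p, bits, 1 - bits)
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

true_rate, m, n = 0.3, 100, 100_000  # the m << n regime from the abstract
curator_bits = (rng.random(m) < true_rate).astype(int)
local_bits = (rng.random(n) < true_rate).astype(int)

# Equal weighting of the two unbiased estimates is a placeholder; weighting
# by inverse variance would be the natural refinement.
est = 0.5 * curator_estimate(curator_bits, 1.0) + 0.5 * local_estimate(local_bits, 1.0)
print(f"hybrid estimate of {true_rate}: {est:.3f}")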

Cite as

Amos Beimel, Aleksandra Korolova, Kobbi Nissim, Or Sheffet, and Uri Stemmer. The Power of Synergy in Differential Privacy: Combining a Small Curator with Local Randomizers. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{beimel_et_al:LIPIcs.ITC.2020.14,
  author =	{Beimel, Amos and Korolova, Aleksandra and Nissim, Kobbi and Sheffet, Or and Stemmer, Uri},
  title =	{{The Power of Synergy in Differential Privacy: Combining a Small Curator with Local Randomizers}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.14},
  URN =		{urn:nbn:de:0030-drops-121195},
  doi =		{10.4230/LIPIcs.ITC.2020.14},
  annote =	{Keywords: differential privacy, hybrid model, private learning, local model}
}
Document
Preference-Informed Fairness

Authors: Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In this work, we study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al. [ITCS 2012], which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals IF aims to protect (e.g., a protected minority group). A natural alternative to IF is the classic fair-division notion of envy-freeness (EF): no individual should prefer another individual's outcome over their own. Although EF allows for solutions where all individuals receive a highly preferred outcome, EF may also be overly restrictive for the decision-maker. For instance, if many individuals agree on the best outcome, then if any individual receives this outcome, they all must receive it, regardless of each individual's underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted-advertising setting introduced by Dwork and Ilvento [ITCS 2019].
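
A minimal sketch of the PIIF condition as paraphrased above, assuming discrete outcomes, linear utilities, and total-variation distance as the metric on outcome lotteries; the paper's metric and preference model may differ. The check asks, for every ordered pair (i, j), whether i weakly prefers their own lottery to some lottery that would be IF-compliant relative to j's.

import numpy as np

def worst_within_tv(p, u, d):
    # Minimize the linear utility u over lotteries within TV distance d of p
    # by greedily shifting mass from the most- to the least-preferred outcome.
    q = p.astype(float).copy()
    worst = int(np.argmin(u))
    budget = d
    for k in np.argsort(u)[::-1]:  # most-preferred outcomes first
        if k == worst:
            continue
        moved = min(q[k], budget)
        q[k] -= moved
        q[worst] += moved
        budget -= moved
    return float(u @ q)

def is_piif(P, U, D):
    # P[i]: i's outcome lottery, U[i]: i's utilities, D[i][j]: task metric.
    # i need only prefer P[i] to the *least*-preferred lottery within
    # D[i][j] of P[j], which is what makes PIIF a relaxation of both IF
    # (take that lottery to be P[i]) and EF (take it to be P[j]).
    n = len(P)
    return all(
        U[i] @ P[i] >= worst_within_tv(P[j], U[i], D[i][j]) - 1e-9
        for i in range(n) for j in range(n) if i != j
    )

P = np.array([[0.9, 0.1], [0.1, 0.9]])  # two individuals, two outcomes
U = np.array([[1.0, 0.0], [0.0, 1.0]])  # each prefers a different outcome
D = np.zeros((2, 2))                    # identically qualified individuals
print(is_piif(P, U, D))  # True: IF would force equal lotteries, PIIF does not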

Cite as

Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. Preference-Informed Fairness. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kim_et_al:LIPIcs.ITCS.2020.16,
  author =	{Kim, Michael P. and Korolova, Aleksandra and Rothblum, Guy N. and Yona, Gal},
  title =	{{Preference-Informed Fairness}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.16},
  URN =		{urn:nbn:de:0030-drops-117010},
  doi =		{10.4230/LIPIcs.ITCS.2020.16},
  annote =	{Keywords: algorithmic fairness}
}