8 Search Results for "Cheu, Albert"


Uniformity Testing Under User-Level Local Privacy

Authors: Clément L. Canonne, Abigail Gentle, and Vikrant Singhal

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
We initiate the study of distribution testing under user-level local differential privacy, where each of n users contributes m samples from the unknown underlying distribution. This setting, albeit very natural, is significantly more challenging than the usual locally private setting, as for the same parameter ε the privacy guarantee must now apply to a full batch of m data points. While some recent work considers distribution learning in this user-level setting, nothing was known for even the most fundamental testing task, uniformity testing (and its generalization, identity testing). We address this gap by providing (nearly) sample-optimal user-level LDP algorithms for uniformity and identity testing. Motivated by practical considerations, our main focus is on the private-coin, symmetric setting, which requires users neither to share a common random seed nor to have been assigned a globally unique identifier.
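The paper's user-level LDP testers are not reproduced in the abstract; as a non-private point of reference, the classical collision-based uniformity tester it builds on can be sketched as follows (function names and the acceptance threshold are illustrative):

```python
import numpy as np

def collision_statistic(samples, k):
    # Fraction of colliding sample pairs; concentrates near 1/k for
    # uniform data and is at least (1 + dist^2)/k for dist-far data.
    n = len(samples)
    counts = np.bincount(samples, minlength=k)
    return (counts * (counts - 1)).sum() / (n * (n - 1))

def uniformity_test(samples, k, dist=0.5):
    # Accept iff the collision statistic is below the midpoint threshold.
    return collision_statistic(samples, k) <= (1 + dist**2 / 2) / k
```

This is the textbook baseline only; the user-level and privacy aspects studied in the paper require substantially different machinery.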

Cite as

Clément L. Canonne, Abigail Gentle, and Vikrant Singhal. Uniformity Testing Under User-Level Local Privacy. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2026.33,
  author =	{Canonne, Cl\'{e}ment L. and Gentle, Abigail and Singhal, Vikrant},
  title =	{{Uniformity Testing Under User-Level Local Privacy}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{33:1--33:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.33},
  URN =		{urn:nbn:de:0030-drops-253201},
  doi =		{10.4230/LIPIcs.ITCS.2026.33},
  annote =	{Keywords: Differential Privacy, Local Differential Privacy, Uniformity Testing, Identity Testing, Hypothesis Testing, User-Level Differential Privacy, Person-Level Differential Privacy}
}
Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High ε Regime

Authors: Charlie Harrison and Pasin Manurangsi

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Differential privacy (DP) can be achieved in a distributed manner, where multiple parties add independent noise such that their sum protects the overall dataset with DP. A common technique here is for each party to sample their noise from the decomposition of an infinitely divisible distribution. We analyze two mechanisms in this setting: 1) the generalized discrete Laplace (GDL) mechanism, whose distribution (which is closed under summation) follows from differences of i.i.d. negative binomial shares, and 2) the multi-scale discrete Laplace (MSDLap) mechanism, a novel mechanism given by the sum of multiple i.i.d. discrete Laplace shares at different scales. For ε ≥ 1, our mechanisms can be parameterized to have O(Δ³ e^{-ε}) and O(min(Δ³ e^{-ε}, Δ² e^{-2ε/3})) MSE, respectively, where Δ denotes the sensitivity; the latter bound matches known optimality results. Furthermore, the MSDLap mechanism has the optimal MSE including constants as ε → ∞. We also show a transformation from the discrete setting to the continuous setting, which allows us to transform both mechanisms to the continuous setting and thereby achieve the optimal O(Δ² e^{-2ε / 3}) MSE. To our knowledge, these are the first infinitely divisible additive noise mechanisms that achieve order-optimal MSE under pure DP for either the discrete or continuous setting, so our work shows formally there is no separation in utility when query-independent noise adding mechanisms are restricted to infinitely divisible noise. For the continuous setting, our result improves upon Pagh and Stausholm's Arete distribution, which gives an MSE of O(Δ² e^{-ε/4}) [Pagh and Stausholm, 2022]. Furthermore, we give an exact sampler tuned to efficiently implement the MSDLap mechanism, and we apply our results to improve a state-of-the-art multi-message shuffle DP protocol from [Balle et al., 2020] in the high ε regime.
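The infinite-divisibility idea the abstract relies on is easy to illustrate: the negative binomial distribution is closed under summation in its size parameter, so n parties can each draw a difference of two NB(r/n, p) variables and their shares sum to a generalized discrete Laplace variable on the aggregate. A minimal sketch, with parameter choices that are illustrative rather than the paper's tuned ones:

```python
import numpy as np

def gdl_share(rng, r, p, num_parties):
    # One party's noise share: a difference of two NB(r/n, p) draws.
    # Summing all n parties' shares yields NB(r, p) - NB(r, p), i.e.
    # generalized discrete Laplace noise on the aggregate.
    return int(rng.negative_binomial(r / num_parties, p)
               - rng.negative_binomial(r / num_parties, p))

def noisy_distributed_sum(values, r=2.0, p=0.5, seed=0):
    # Each party adds its own share before contributing its value.
    rng = np.random.default_rng(seed)
    n = len(values)
    return sum(v + gdl_share(rng, r, p, n) for v in values)
```

No single party's contribution reveals the aggregate noise, yet the sum carries exactly the target distribution; calibrating (r, p) to a given ε and sensitivity Δ is the subject of the paper.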

Cite as

Charlie Harrison and Pasin Manurangsi. Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High ε Regime. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{harrison_et_al:LIPIcs.FORC.2025.12,
  author =	{Harrison, Charlie and Manurangsi, Pasin},
  title =	{{Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High \epsilon Regime}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.12},
  URN =		{urn:nbn:de:0030-drops-231396},
  doi =		{10.4230/LIPIcs.FORC.2025.12},
  annote =	{Keywords: Differential Privacy, Distributed Noise Addition}
}
Private Estimation When Data and Privacy Demands Are Correlated

Authors: Syomantak Chaudhuri and Thomas A. Courtade

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Differential Privacy (DP) is the current gold standard for ensuring privacy for statistical queries. Estimation problems under DP constraints appearing in the literature have largely focused on providing equal privacy to all users. We consider the problems of empirical mean estimation for univariate data and frequency estimation for categorical data, both subject to heterogeneous privacy constraints. Each user, contributing a sample to the dataset, is allowed to have a different privacy demand. The dataset itself is assumed to be worst-case, and we study both problems under two formulations: first, where privacy demands and data may be correlated; and second, where correlations are weakened by random permutation of the dataset. We establish theoretical performance guarantees for our proposed algorithms, under both PAC error and mean-squared error. These performance guarantees translate to minimax optimality in several instances, and experiments confirm superior performance of our algorithms over other baseline techniques.
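The paper's minimax-optimal algorithms are not given in the abstract; a common baseline for heterogeneous privacy demands, in which each user locally satisfies its own ε_i and the server downweights the noisiest reports, might look like the following (the inverse-variance weighting rule is an assumption, not the paper's method):

```python
import numpy as np

def personalized_mean(xs, epsilons, seed=0):
    # Each user adds Laplace(1/eps_i) noise to its value in [0, 1],
    # meeting its own eps_i-LDP demand; the server then combines the
    # reports with inverse-variance weights (Laplace variance = 2/eps_i^2,
    # so the weight is proportional to eps_i^2).
    rng = np.random.default_rng(seed)
    eps = np.asarray(epsilons, dtype=float)
    reports = np.asarray(xs, dtype=float) + rng.laplace(scale=1.0 / eps)
    w = eps ** 2
    return float(np.sum(w * reports) / np.sum(w))
```

This baseline ignores the data-privacy correlations that are the paper's focus, which is precisely where the naive approach can fail.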

Cite as

Syomantak Chaudhuri and Thomas A. Courtade. Private Estimation When Data and Privacy Demands Are Correlated. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chaudhuri_et_al:LIPIcs.FORC.2025.3,
  author =	{Chaudhuri, Syomantak and Courtade, Thomas A.},
  title =	{{Private Estimation When Data and Privacy Demands Are Correlated}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.3},
  URN =		{urn:nbn:de:0030-drops-231305},
  doi =		{10.4230/LIPIcs.FORC.2025.3},
  annote =	{Keywords: Differential Privacy, Personalized Privacy, Heterogeneous Privacy, Correlations in Privacy}
}
Differential Privacy Under Multiple Selections

Authors: Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
We consider the setting where a user with sensitive features wishes to obtain a recommendation from a server in a differentially private fashion. We propose a "multi-selection" architecture where the server can send back multiple recommendations and the user chooses the one that matches best with their private features. When the user feature is one-dimensional (on an infinite line) and the accuracy measure is defined w.r.t. some increasing function 𝔥(.) of the distance on the line, we precisely characterize the optimal mechanism that satisfies differential privacy. The specification of the optimal mechanism includes both the distribution of the noise that the user adds to its private value, and the algorithm used by the server to determine the set of results to send back as a response. We show that Laplace is an optimal noise distribution in this setting. Furthermore, we show that this optimal mechanism results in an error that is inversely proportional to the number of results returned when the function 𝔥(.) is the identity function.
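The architecture itself is easy to sketch: the user perturbs its location with Laplace noise (which the paper shows is optimal), the server responds with k candidates, and the user privately keeps the best match. The candidate-placement rule below is a naive equispaced grid, not the paper's optimal server algorithm:

```python
import numpy as np

def multi_selection(x, eps, k, spread=1.0, seed=0):
    # User side: release the private location with Laplace(1/eps) noise.
    rng = np.random.default_rng(seed)
    report = x + rng.laplace(scale=1.0 / eps)
    # Server side: return k candidates around the report (naive spacing).
    candidates = report + np.linspace(-spread, spread, k)
    # User side: keep whichever candidate is closest to the true x;
    # this choice happens locally, so it leaks nothing further.
    return float(candidates[np.argmin(np.abs(candidates - x))])
```

Even with this naive grid, the final error shrinks as k grows because only the noisy report, not the selection, crosses the privacy barrier.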

Cite as

Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar. Differential Privacy Under Multiple Selections. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 8:1-8:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{goel_et_al:LIPIcs.FORC.2025.8,
  author =	{Goel, Ashish and Jiang, Zhihao and Korolova, Aleksandra and Munagala, Kamesh and Sarmasarkar, Sahasrajit},
  title =	{{Differential Privacy Under Multiple Selections}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{8:1--8:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.8},
  URN =		{urn:nbn:de:0030-drops-231353},
  doi =		{10.4230/LIPIcs.FORC.2025.8},
  annote =	{Keywords: Differential Privacy, Mechanism Design and Multi-Selection}
}
Locally Private Histograms in All Privacy Regimes

Authors: Clément L. Canonne and Abigail Gentle

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
Frequency estimation, a.k.a. histograms, is a workhorse of data analysis, and as such has been thoroughly studied under differential privacy. In particular, computing histograms in the local model of privacy has been the focus of a fruitful recent line of work, and various algorithms have been proposed, achieving the order-optimal 𝓁_∞ error in the high-privacy (small ε) regime while balancing other considerations such as time- and communication-efficiency. However, to the best of our knowledge, the picture is much less clear when it comes to the medium- or low-privacy regime (large ε), despite its increased relevance in practice. In this paper, we investigate locally private histograms, and the very related distribution learning task, in this medium-to-low privacy regime, and establish near-tight (and somewhat unexpected) bounds on the 𝓁_∞ error achievable. As a direct corollary of our results, we obtain a protocol for histograms in the shuffle model of differential privacy, with accuracy matching previous algorithms but significantly better message and communication complexity. Our theoretical findings emerge from a novel analysis, which appears to improve bounds across the board for the locally private histogram problem. We back our theoretical findings by an empirical comparison of existing algorithms in all privacy regimes, to assess their typical performance and behaviour beyond the worst-case setting.
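For context, one of the standard locally private frequency oracles this line of work compares against is k-ary randomized response; a self-contained sketch (names are illustrative):

```python
import numpy as np

def krr_respond(x, k, eps, rng):
    # Report the true symbol with probability e^eps / (e^eps + k - 1),
    # otherwise a uniformly random *other* symbol.
    if rng.random() < np.exp(eps) / (np.exp(eps) + k - 1):
        return x
    other = rng.integers(0, k - 1)
    return other + (other >= x)  # skip over x itself

def estimate_histogram(reports, k, eps):
    # Debias the empirical report frequencies.
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)   # P[report = truth]
    q = 1.0 / (np.exp(eps) + k - 1)           # P[report = a given other symbol]
    raw = np.bincount(reports, minlength=k) / n
    return (raw - q) / (p - q)
```

How mechanisms like this one behave as ε grows large, and which of them remain order-optimal there, is exactly the regime the paper investigates.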

Cite as

Clément L. Canonne and Abigail Gentle. Locally Private Histograms in All Privacy Regimes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 25:1-25:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.25,
  author =	{Canonne, Cl\'{e}ment L. and Gentle, Abigail},
  title =	{{Locally Private Histograms in All Privacy Regimes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{25:1--25:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.25},
  URN =		{urn:nbn:de:0030-drops-226532},
  doi =		{10.4230/LIPIcs.ITCS.2025.25},
  annote =	{Keywords: Differential Privacy, Local Differential Privacy, Histograms, Frequency Estimation, Lower Bounds, Maximum Error}
}
Differential Privacy on Trust Graphs

Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Serena Wang

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data. Specifically, given a trust graph where vertices correspond to parties and neighbors are mutually trusting, we give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP (where each party trusts no other party). We further study a robust variant where each party trusts all but an unknown subset of at most t of its neighbors (where t is a given parameter), and give an algorithm for this setting. We complement our algorithms with lower bounds, and discuss implications of our work to other tasks in private learning and analytics.
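A toy version of the aggregation idea, under heavy simplification: route each party's raw value to a trusted neighbor in a dominating set of the trust graph, and add one noise share per dominator rather than one per party (the actual mechanism and its calibration are in the paper; this sketch only conveys why fewer aggregators can mean less total noise):

```python
import numpy as np

def trust_graph_sum(values, trusted_of, eps, seed=0):
    # trusted_of[i] is a dominating-set vertex that party i trusts;
    # party i reveals its raw value only to that aggregator.
    rng = np.random.default_rng(seed)
    partial = {}
    for i, v in enumerate(values):
        partial[trusted_of[i]] = partial.get(trusted_of[i], 0.0) + v
    # One Laplace term per aggregator, instead of one per party as in
    # the local model of DP.
    return sum(s + rng.laplace(scale=1.0 / eps)
               for s in partial.values())
```

With a small dominating set, the error scales with the number of aggregators rather than the number of parties, which is the source of the improved privacy-utility trade-off the abstract refers to.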

Cite as

Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Serena Wang. Differential Privacy on Trust Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 53:1-53:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ghazi_et_al:LIPIcs.ITCS.2025.53,
  author =	{Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Wang, Serena},
  title =	{{Differential Privacy on Trust Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{53:1--53:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.53},
  URN =		{urn:nbn:de:0030-drops-226816},
  doi =		{10.4230/LIPIcs.ITCS.2025.53},
  annote =	{Keywords: Differential privacy, trust graphs, minimum dominating set, packing number}
}
Necessary Conditions in Multi-Server Differential Privacy

Authors: Albert Cheu and Chao Yan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We consider protocols where users communicate with multiple servers to perform a computation on the users' data. An adversary exerts semi-honest control over many of the parties but its view is differentially private with respect to honest users. Prior work described protocols that required multiple rounds of interaction or offered privacy against a computationally bounded adversary. Our work presents limitations of non-interactive protocols that offer privacy against unbounded adversaries. We prove that these protocols require exponentially more samples than centrally private counterparts to solve some learning, testing, and estimation tasks. This means sample-efficiency demands interactivity or computational differential privacy, or both.

Cite as

Albert Cheu and Chao Yan. Necessary Conditions in Multi-Server Differential Privacy. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{cheu_et_al:LIPIcs.ITCS.2023.36,
  author =	{Cheu, Albert and Yan, Chao},
  title =	{{Necessary Conditions in Multi-Server Differential Privacy}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.36},
  URN =		{urn:nbn:de:0030-drops-175395},
  doi =		{10.4230/LIPIcs.ITCS.2023.36},
  annote =	{Keywords: Differential Privacy, Parity Learning, Multi-server}
}
Separating Local & Shuffled Differential Privacy via Histograms

Authors: Victor Balcer and Albert Cheu

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
Recent work in differential privacy has highlighted the shuffled model as a promising avenue to compute accurate statistics while keeping raw data in users' hands. We present a protocol in this model that estimates histograms with error independent of the domain size. This implies an arbitrarily large gap in sample complexity between the shuffled and local models. On the other hand, we show that the models are equivalent when we impose the constraints of pure differential privacy and single-message randomizers.
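The shuffled model's core step can be sketched with a single binary counter: users randomize locally with a weak flip probability, a shuffler discards the ordering, and the analyzer debiases. This is a generic shuffle-model sketch, not the paper's domain-size-independent histogram protocol:

```python
import numpy as np

def shuffled_count(bits, flip_prob, seed=0):
    # Local step: each user flips its bit with probability flip_prob.
    rng = np.random.default_rng(seed)
    bits = np.asarray(bits)
    noisy = np.where(rng.random(len(bits)) < flip_prob, 1 - bits, bits)
    # Shuffle step: a uniformly random permutation severs the link
    # between each report and its sender.
    shuffled = noisy[rng.permutation(len(noisy))]
    # Analyzer: debias the observed count of ones.
    return float((shuffled.sum() - flip_prob * len(bits))
                 / (1 - 2 * flip_prob))
```

The anonymity added by the shuffler is what lets a much weaker local randomizer achieve a strong privacy guarantee; quantifying when this helps, and when it provably cannot, is the subject of the paper.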

Cite as

Victor Balcer and Albert Cheu. Separating Local & Shuffled Differential Privacy via Histograms. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{balcer_et_al:LIPIcs.ITC.2020.1,
  author =	{Balcer, Victor and Cheu, Albert},
  title =	{{Separating Local \& Shuffled Differential Privacy via Histograms}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.1},
  URN =		{urn:nbn:de:0030-drops-121068},
  doi =		{10.4230/LIPIcs.ITC.2020.1},
  annote =	{Keywords: Differential Privacy, Distributed Protocols, Histograms}
}
