
Locally Private Histograms in All Privacy Regimes

Clément L. Canonne, School of Computer Science, University of Sydney, Australia
Abigail Gentle, School of Computer Science, University of Sydney, Australia
Abstract

Frequency estimation, a.k.a. histograms, is a workhorse of data analysis, and as such has been thoroughly studied under differential privacy. In particular, computing histograms in the local model of privacy has been the focus of a fruitful recent line of work, and various algorithms have been proposed, achieving the order-optimal error in the high-privacy (small ε) regime while balancing other considerations such as time- and communication-efficiency. However, to the best of our knowledge, the picture is much less clear when it comes to the medium- or low-privacy regime (large ε), despite its increased relevance in practice. In this paper, we investigate locally private histograms, and the very related distribution learning task, in this medium-to-low privacy regime, and establish near-tight (and somewhat unexpected) bounds on the error achievable. As a direct corollary of our results, we obtain a protocol for histograms in the shuffle model of differential privacy, with accuracy matching previous algorithms but significantly better message and communication complexity.

Our theoretical findings emerge from a novel analysis, which appears to improve bounds across the board for the locally private histogram problem. We back our theoretical findings by an empirical comparison of existing algorithms in all privacy regimes, to assess their typical performance and behaviour beyond the worst-case setting.

Keywords and phrases:
Differential Privacy, Local Differential Privacy, Histograms, Frequency Estimation, Lower Bounds, Maximum Error
Funding:
Clément L. Canonne: Supported by an ARC DECRA (DE230101329).
Copyright and License:
© Clément L. Canonne and Abigail Gentle; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Security and privacy; Security and privacy → Usability in security and privacy; Security and privacy → Privacy protections; Theory of computation → Theory of database privacy and security
Related Version:
Full Version: https://arxiv.org/abs/2408.04888
Acknowledgements:
The authors would like to thank Guy Blanc for the proof of Lemma 17, and Albert Cheu for insightful discussions regarding the use of amplification by shuffling (Theorem 25). This work was done in part while the authors were visiting the Simons Institute for the Theory of Computing.
Editors:
Raghu Meka

1 Introduction

Frequency estimation is a fundamental problem of statistics: besides its use for basic surveying, it is also used as a building block in distribution learning, identifying heavy hitters in sparse domains, and regression, correlation or covariance analysis. As such, frequency estimation (and the closely related problem of heavy hitters) have been thoroughly studied in a variety of settings, ranging from streaming to privacy-preserving analytics. We here focus on the latter, and specifically on the local model of differential privacy (LDP), where the data is distributed across a large number of users, and each datum is subject to stringent differential privacy requirements. This question has, over the past decade, received a lot of attention, starting with [8, 28]; as we elaborate in Section 1.1, much is known about locally private frequency estimation, to the point that it may seem the question has already been fully resolved – both in theory and in practice. However, most recent large-scale implementations of local privacy (see, e.g., [5, 18]) must balance many efficiency objectives, including the bandwidth and computational requirements (as, among others, a proxy for energy consumption), and, above all, the estimation accuracy. Keeping this accuracy in check has, in turn, led to the common use of relatively large values of the “privacy parameter” ε, far from the original rule of thumb of ε ≤ 1. (See, for instance, the list of real-world uses of differential privacy maintained at https://desfontain.es/blog/real-world-differential-privacy.html, which includes use cases of LDP along with the corresponding stated values of ε.)

In view of the relevance of frequency estimation to privacy-preserving algorithms and deployments, it is crucial that theory be informed by, and applicable to, practice; and that constant factors, bounds on achievability, and the behaviour of estimation error in all parameter regimes are well understood. Yet, to the best of our knowledge, most if not all of the previous theoretical work on locally private frequency estimation focuses on the high-privacy ε ≤ 1 regime, leaving the low-privacy regime by and large uncharted. Addressing this gap in our theoretical understanding is the focus of our work.

The question and setting

We will focus on examining the standard worst-case estimation error, namely the ℓ∞ error, as it is the most relevant for the problem, with special importance for controlling error rates in heavy hitters. Specifically, when estimating the empirical frequencies q = (q_1, …, q_k) over a universe 𝒳 of size k from n users, the (expected) error is given by

𝔼[‖q̂ − q‖_∞] = 𝔼[max_{1≤i≤k} |q̂_i − q_i|]   (1)

where the expectation is taken over the (possibly randomised) algorithm given the users’ data as input, and q̂ is the vector of estimated frequencies. We are interested in the worst-case error over all possible datasets, that is, the supremum of the above quantity over all inputs X = (X_1, …, X_n) ∈ 𝒳^n (equivalently, all q). We will seek algorithms (“protocols”) to minimize this worst-case error under the constraint of (pure) local privacy (see Section 2), parameterized by the privacy parameter ε > 0. As in most distributed settings, one can consider several variants depending on whether a common random seed is available to the users (public-coin protocols) or not (private-coin protocols). It is known that for locally private frequency estimation allowing public-coin protocols can provably reduce the communication requirements [3] (specifically, the cited paper establishes a separation between public- and private-coin protocols, showing that the former can achieve vanishing error even with constant-length communication per user, a setting where the latter must incur Ω(1) error; to complement this, it is known that using public randomness one can always reduce the per-player communication to ⌈ε⌉ bits [8]); similarly, one can allow some back and forth between users and the central server (interactive protocols). However, the public-coin and interactive settings come at increased deployment cost, and are often less easy (or even impossible) to implement, as they require either broadcasting from the center or sustained two-way communication between the parties. As our focus is chiefly on analyzing the most versatile setting, we hereafter restrict ourselves, for our algorithms, to the private-coin (non-interactive) setting; we mention, however, that our lower bounds apply to the public-coin setting as well.

A closely related question is that of distribution learning under ℓ∞ loss, which differs from frequency estimation in that the dataset X is not arbitrary, but instead assumed to be drawn i.i.d. from an unknown probability distribution q over 𝒳: in this sense, distribution learning is an “easier” problem than frequency estimation, and an algorithm for the latter implies one for the former. (There are some subtleties here, but it is worth noting, as a baseline, that absent privacy constraints it is well known that the error of learning to ℓ∞ scales as √(1/n), with no dependence on the alphabet size k; while frequency estimation can be done with zero error, as each user can simply send their data.) Formal definitions, as well as the relation between these two questions, can be found in Section 2. In this paper, we therefore focus on frequency estimation, and will point out the implications for distribution learning as corollaries.

Connection to shuffle privacy

To conclude this subsection, we mention that besides the use of large values of ε in practical deployments of locally private algorithms, which calls for a better understanding of the tasks in this parameter regime, another motivation for studying the “low-privacy regime” comes from the emergence of another model of differential privacy, shuffle privacy [15, 21], and its increasingly widespread adoption. Indeed, it is known that one generic way to obtain shuffle private algorithms is via the so-called “amplification-by-shuffling” technique (see, e.g., [23]), whereby a locally private algorithm with high ε (low privacy) yields a shuffle private protocol with small ε (high privacy), and the same number of messages and communication cost. This makes characterizing the low-privacy regime of locally private algorithms a very consequential question, especially for fundamental tasks such as frequency estimation and the related distribution learning.

1.1 Prior work

Many innovative algorithms have been proposed in recent years for locally private histograms and distribution estimation [22, 3, 14, 34, 33, 25, 24], with a subset focusing on ℓ∞ error. (For distribution estimation, a common error measure in the literature is ℓ_1, corresponding to total variation distance.) A lower bound of Ω(√(log k/(ε²n))) for frequency estimation under local differential privacy was established by [8] for most regimes of k, n, and small privacy parameter ε; and various LDP protocols asymptotically achieving this bound have been proposed (see Table 1).

Most recently, upper bounds on the ℓ∞ error for frequency estimation were derived in [14], and evaluated empirically in [24]. For large ε, i.e., the “low-privacy regime,” to the best of our knowledge the best current results are (1) a bound of O(√(log k/(εn))) on the error rate established in [14], and (2) an upper bound of O(√(log k)/(e^{ε/2}·√n) + √((log k)/n)) from [29] (Theorem 2), based on CountSketch: note that this error does not vanish as ε grows.

In [25], it was stated that under the most commonly used metrics (ℓ∞, ℓ₂², ℓ_1), the error is primarily driven by the variance of an algorithm. We show that, at least for ℓ∞, a closer look at sub-gaussian and sub-gamma behaviour can better capture the error in low-privacy regimes. Additionally, [14] raise the question of whether the upper bound of O(√(log k/(εn))) is tight in the low-privacy regime. We show in Theorem 14 that, quite surprisingly, it is not.

Table 1: Selection of the best known upper bounds for communication, ℓ∞ error, and ℓ₂² error; entries marked “(ours)” are our results, and † indicates results not established in the corresponding paper, but that we derive for completeness. As discussed above, “private-coin” refers to the fact that the users do not require access to a common random seed (typically easier to implement than “public-coin” protocols, where they do). We note that all public-coin protocols mentioned here can be made private-coin at the cost of an O(log k) blowup factor in the communication cost.
| Protocol | Private-coin | Communication | ℓ∞ error | ℓ₂² error |
| --- | --- | --- | --- | --- |
| RAPPOR [22] | ✓ | k | √(log k/(n·min(1,ε²))); (ours) √(log k/(n·min(ε,ε²))); (ours) √(log k/(n·e^{ε/2})) + (log k/(nε))·log n | k·e^{ε/2}/(n(e^{ε/2}−1)²) |
| Subset Selection [33] | ✓ | (k/e^ε)·max(1,ε) | ? | k·e^ε/(n(e^ε−1)²) |
| (G)HR [3, Theorem 7] | ✓ | log k | ? | k·e^ε/(n(e^ε−1)²) |
| RHR [14, Theorem 3.1] | × | – | ? | k/(n·min((e^{ε/2}−1)², e^{ε/2}, k)) |
| No name [14, Theorem 3.1] | × | – | √(log k/(n·min(ε,ε²))) | k·e^{ε/2}/(n(e^{ε/2}−1)²) (†) |
| CountSketch-based [29] | × | max(ε, log(1/ε)) | √(log k/(n(e^{ε/2}−1)²)) + √(log k/n) | k·e^{ε/2}/(n(e^{ε/2}−1)²) |
| PGR [24] | ✓ | log k | (ours) √(log k/(n·e^ε)) + (log k/(nε))·log n | k·e^ε/(n(e^ε−1)²) |

Histograms have also seen detailed analysis in the shuffle model, both as a practical question [26, 27, 15, 16, 9] and as a benchmark for reasoning about the power of the shuffle model [6]. As two points of comparison, we highlight [26], which introduced a multiple-message protocol for shuffled histograms achieving expected ℓ∞ error O(√(log k·log(1/δ))/(εn)) with O(k^{1/100}) messages per user, and [6], which demonstrated expected maximum error O(log(1/δ)/(ε²n)) with k+1 messages per user. While our results do not achieve the latter bound, we show, surprisingly, that the former can be achieved in some fairly permissive parameter regimes with only a single message.

1.2 Overview of results

We here summarise our main results, and briefly discuss the underlying techniques. While we focus on two protocols in particular, the techniques themselves are broadly applicable. Our first set of results concerns the error rate achievable by LDP protocols for frequency estimation. We first recall, as a baseline, a general transformation which converts any LDP protocol with optimal error in the high-privacy regime into another LDP protocol with reasonable error in the low-privacy regime, at the cost of a blowup in communication:

Proposition 1 (Informal; see Proposition 9).

Let A be any locally private protocol for frequency estimation with expected error O(√(log k/(nε²))) for ε ≤ 1, using ℓ bits of communication per user. Then there is a locally private protocol A′ achieving error

O(√(log k/(n·min(ε,ε²))))

for all ε > 0, using ⌈ε⌉·ℓ bits of communication per user.

We emphasise that this transformation is not new, and mimics an argument found in, e.g., [14]. This general transformation is quite appealing, as it provides (in theory) good performance in the low-privacy regime “for free” given any good enough LDP protocol for the high-privacy one. However, this comes at a price: first, the communication blowup, which can be impractical; second, a loss in constant factors, which, while relatively small, might still be prohibitive; and, perhaps most importantly, it requires changing the existing algorithm (and, as a result, the data analysis pipeline), which is often a significant hurdle. Our second result, focusing on one of the earliest, most versatile, and (at least in its “vanilla” version) conceptually simplest LDP protocols for frequency estimation, RAPPOR [22], shows that this transformation is not actually necessary: RAPPOR achieves this improved bound without modification:

Theorem 2 (Informal; see Theorem 13).

The (simple version of) RAPPOR achieves expected error

O(√(log k/(n·min(ε,ε²))))

for all ε > 0, using k bits of communication per user.

In comparison, using the generic transformation above on RAPPOR would require ⌈ε⌉ k-bit messages per user, and, in terms of worst-case theoretical bounds, would incur an expected error worse by a factor of roughly 2.04. As such, our first result can be summarized as saying that analyzing (again) an existing algorithm can be better than modifying it – and, quite importantly, that it may not be necessary to change an existing algorithmic pipeline to inherit better guarantees.

The proof of the above result relies on a careful analysis of the expected maximum of sums of Bernoulli random variables, and specifically on a fine-grained analysis of their subgaussian behaviour in the “highly biased” regime. While RAPPOR is particularly amenable to this analysis, we believe that this technique is highly general and applicable to a broad range of LDP protocols, for example those following the general “scheme template” of [4].

Yet, trying to establish optimality of this min(ε,ε2) scaling turned out to be very challenging. And indeed, this is for a good reason: as we show, it is actually possible to achieve significantly better error rate in the low-privacy regime – and, surprisingly, this much better error is again attained by RAPPOR, out-of-the-box:

Theorem 3 (Informal; see Theorem 14).

The (simple version of) RAPPOR achieves expected error

Õ(max(√(log k/(n·e^{ε/2})), log k/(nε)))

for all ε ≥ 1, using k bits of communication per user.

The Õ notation here only hides a single log n factor, in the second term. As the error is always at most 1, up to this log n factor this is always better than Theorem 2.

The proof of this result requires going beyond the sub-gaussian concentration behaviour alluded to before, and instead analysing the maximum of these sums of Bernoulli random variables as sub-gamma random variables. More precisely, we draw upon very recent work by [17] (see also [10]) on “local Glivenko–Cantelli” results, which provide refined concentration bounds for mean estimation of high-dimensional product distributions – in our case, the distributions over {0,1}^k arising from the use of RAPPOR.

The above result is appealing, in that it not only yields a better error rate than previously known (or, in many cases, believed possible) in the ε ≥ 1 regime; but also in that it is achieved “for free” by an existing and widely used algorithm. However, it does have one negative aspect: namely, that RAPPOR is, in its standard version, very inefficient from a communication point of view, as it requires one k-bit message from each user – far from the ideal O(log k) bits one could hope for. Luckily, as mentioned above, the analysis underlying Theorem 3 is quite general, and applies to a broad range of locally private estimation algorithms. While it does not lead to the improved bound for all such protocols, we show that it applies, for instance, to the recently proposed Projective Geometry Response protocol of Feldman, Nelson, Nguyen, and Talwar [24], whose performance was originally only analyzed for ℓ₂² error:

Theorem 4 (Informal; see Theorem 18).

Projective Geometry Response (PGR) achieves expected error

Õ(max(√(log k/(n·e^ε)), log k/(nε)))

for all ε ≥ 1, using O(log k) bits of communication per user.

Note that again this is achieved “out of the box” by an existing algorithm, without any modification! As a result, it inherits all the features of PGR that its authors originally established: crucially, its computational efficiency. We also point out that the stated guarantee is even slightly better than that of RAPPOR, as the first term now features an e^ε dependence (instead of the e^{ε/2} of Theorem 3).

We then turn to proving optimality of this error rate, and show that it is, up to a logarithmic factor, optimal for any LDP protocol:

Theorem 5 (Informal; see Theorem 24).

Any LDP protocol for frequency estimation must have, in the worst case, expected error

Ω(√(log k/(nε²)))

for ε ∈ (0,1], and

Ω(max(√(log k/(n·e^ε)), log k/(nε)))

for ε ≥ 1.

We remark that this lower bound also applies to the “easier” question of distribution learning, as we will state momentarily. The first part of this lower bound, as mentioned before, was already known; the second part is new, and, while not necessarily difficult to show in hindsight, does require significant care in combining inequalities between various information-theoretic quantities to avoid ending up with a vacuous bound. In this sense, our main contribution for the lower bound is to establish it in a self-contained, streamlined fashion, by drawing on the “chi-square contraction” framework of [2]; and – importantly – that it matches the upper bound obtained earlier in both the high- and low-privacy regimes. Another interesting aspect of this lower bound is that all three terms are derived separately, but from the same family of hard instances: a dataset where almost all users hold the same element.

Implications for shuffle privacy

By combining our new results on LDP protocols for histograms in the low-privacy regime with known “amplification-by-shuffling” results, we are able to obtain a simple, robust shuffle (approximate) DP protocol using only one message per user, of logarithmic length, while achieving state-of-the-art accuracy:

Theorem 6 (Informal; see Theorem 27).

Shuffling the Projective Geometry Response protocol achieves expected error

O(√(log k·log(1/δ))/(nε))

for all ε ∈ [Ω(√(log(1/δ)/n)), 1], with O(log k) bits of communication and a single message per user.

This matches that of the best known shuffle DP protocols, but with a much lower message and communication complexity. It was, to the best of our knowledge, still open whether achieving these “best of three worlds” guarantees was possible.

We conclude by providing the corollary of our results for distribution learning, using the known connection to frequency estimation (along with the standard lower bound of Ω(√(1/n)) on the error rate for this question, absent privacy constraints):

Corollary 7.

For distribution learning under local privacy constraints, the minimax error achievable is at most

Õ(max(√(log k/(n·e^ε)), log k/(nε), √(1/n)))

for all ε ≥ 1, and

O(√(log k/(nε²)))

for ε ∈ (0,1]; and this is tight, up to a logarithmic factor (in n). Moreover, the upper bound is attained by PGR, using O(log k) bits of communication per user.

Organisation

After providing some background and setting notation, we establish our theoretical results (upper bounds on the error rate) in Section 3, followed by information-theoretic lower bounds in Section 4 and applications to shuffle privacy in Section 5. Finally, we discuss our results and potential future work in Section 6. Several proofs and a discussion of empirical analyses – omitted for clarity of exposition – can be found in the full version.

2 Preliminaries and notation

(Local) differential privacy

We first recall the definition of local differential privacy (LDP):

Definition 8 (Locally private randomiser).

An algorithm Q : 𝒳 → 𝒴 satisfies ε-differential privacy if, for all pairs of inputs x, x′ ∈ 𝒳 and all sets of outputs S ⊆ 𝒴,

Pr[Q(x) ∈ S] ≤ e^ε · Pr[Q(x′) ∈ S].

A locally private protocol is typically a pair of algorithms: Q : 𝒳 → 𝒴, executed by each user on their datum, and A : 𝒴^n → ℝ^k, executed by the server to transform the randomised outputs into an unbiased estimator of the true quantity. It is well known (see, e.g., [19]) that all LDP frequency estimation algorithms have a stochastic matrix representation, which maps every element of the input alphabet to a probability distribution over the output alphabet. Furthermore, [30] demonstrated that all optimal mechanisms obey a “binary” property, in that their stochastic matrix only contains probabilities weighted by either e^ε or 1 (appropriately normalised). Recently, another family of LDP protocols was introduced by [4] and used in [24], where each input is associated with a set of high-probability outputs determined by some useful set system.
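To make the stochastic-matrix view concrete, the following minimal Python sketch (our own illustration, not code from any of the cited works) builds the matrix of k-ary randomised response, which exhibits exactly this “binary” property: every entry is proportional to either e^ε or 1.

```python
import numpy as np

def k_rr_matrix(k: int, eps: float) -> np.ndarray:
    """Stochastic matrix of k-ary randomised response: row x is the output
    distribution Q(. | x), with entries proportional to e^eps (diagonal)
    or 1 (off-diagonal), normalised to sum to 1."""
    Q = np.ones((k, k)) / (np.exp(eps) + k - 1)
    np.fill_diagonal(Q, np.exp(eps) / (np.exp(eps) + k - 1))
    return Q

Q = k_rr_matrix(k=5, eps=2.0)
assert np.allclose(Q.sum(axis=1), 1.0)          # every row is a distribution
assert np.log(Q.max() / Q.min()) <= 2.0 + 1e-9  # eps-LDP: likelihood ratios at most e^eps
```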

Frequency Estimation

Under frequency estimation, the aim is to estimate the empirical frequency of the observations. We denote the dataset of n observations by X^n ≔ (x_1, …, x_n), and their empirical frequencies by the probability vector q ≔ q(X^n), where q_i ≔ (1/n)·∑_{j=1}^n 𝟙{x_j = i}. That is, q is an element of the probability simplex,

Δ_k ≔ {p ∈ ℝ^k_{≥0} : ∑_{i=1}^k p_i = 1}.

A frequency estimator over an alphabet [k] is a (possibly randomized) function 𝒜 : [k]^n → Δ_k which approximates the true frequencies sufficiently well. The quality of the resulting approximation, denoted q̂, is measured through a suitably chosen loss function. Typical choices of distance are the ℓ_r norms, for r ∈ [1, ∞]; in this paper, we will be mostly concerned with the ℓ∞ distance, where d_∞(q, q̂) = ‖q − q̂‖_∞ = max_{1≤i≤k} |q_i − q̂_i|, and consider the expected loss 𝔼[‖q − q̂‖_∞] (where the expectation is over the randomness of the frequency estimator).
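As a minimal illustration of these definitions (the helper names are ours), the empirical frequency vector and the ℓ∞ loss can be computed as follows:

```python
import numpy as np

def empirical_frequencies(x: np.ndarray, k: int) -> np.ndarray:
    """q_i = (1/n) * #{j : x_j = i}; a point of the simplex Delta_k."""
    return np.bincount(x, minlength=k) / len(x)

def linf_error(q: np.ndarray, q_hat: np.ndarray) -> float:
    """d_inf(q, q_hat) = max_i |q_i - q_hat_i|."""
    return float(np.max(np.abs(q - q_hat)))

q = empirical_frequencies(np.array([0, 1, 1, 3, 0, 2]), k=4)  # [1/3, 1/3, 1/6, 1/6]
```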

Distribution learning (estimation)

Given independent and identically distributed (i.i.d.) samples from an unknown probability distribution p, the goal of distribution learning (density estimation) is to find a distribution p̂ that approximates p sufficiently well. As before, the quality of the approximation is typically measured through a suitably chosen loss function, quantifying the distance between the true distribution p and its approximation p̂. From the above, one can see that the key difference between frequency estimation and distribution learning (under the same loss function) is that in the former the observations are arbitrary, while in the latter they are assumed i.i.d. from some common underlying probability distribution. As such, the expected loss also takes into account the randomness of the samples themselves.

Frequency Estimation implies Distribution Estimation

For any given distance measure d, we denote by Φ(d,k,n) the expected sampling error (under loss d) between the empirical histogram p̂_n from n i.i.d. samples and the true underlying distribution p:

Φ(d,k,n) = sup_{p∈Δ_k} 𝔼[d(p, p̂_n)]

Further, denote by Φ_priv ≔ Φ_priv(d,ε,k,n) the optimal worst-case (over p) expected error under loss d from n i.i.d. samples under ε-LDP constraints; “optimal” here refers to quantifying over all locally private distribution estimators. We analogously write φ_priv ≔ φ_priv(d,ε,k,n) for the optimal expected loss when learning the empirical frequencies.

Fact 1.

Accurate frequency estimation implies accurate distribution estimation: namely, Φ_priv(d,ε,k,n) ≤ Φ(d,k,n) + φ_priv(d,ε,k,n).

For specific norms this allows us to derive (1) a lower bound on φpriv (locally private frequency estimation) from a lower bound on Φpriv (locally private distribution estimation), and (2) an upper bound on Φpriv from an upper bound on φpriv, recalling that (see, for instance, [31, 13])

Φ(d,k,n) = Θ(√(k/n)) when d = ℓ_1, and Φ(d,k,n) = Θ(√(1/n)) when d = ℓ_∞.

3 Better algorithms in the low-privacy regime

Although the order-optimal error rates for LDP frequency estimation and distribution learning are well-understood by now in the high-privacy regime, with many distinct algorithms achieving the tight

O(√(log k/(nε²)))

expected error bound, much less is understood about the best achievable error for large, or even medium, values of ε. In this section, we first revisit (and slightly generalize) an idea from [14], which shows how to convert any protocol “optimal in the high-privacy regime” into a related protocol “good enough in the low-privacy regime as well,” at the price of a blowup in communication (Section 3.1). We then focus on the specific example of RAPPOR, showing that – perhaps surprisingly – this simple protocol already achieves the same bound without any modification or communication blowup (Section 3.2). We finally show that, for distribution learning, this very same RAPPOR in fact achieves an even better error rate than this, leveraging very recent results on the concentration of the empirical mean for high-dimensional distributions, due to [17, 10] (Section 3.3).

3.1 A generic transformation, and a baseline

Here we prove the following generic statement:

Proposition 9.

Suppose there exists a symmetric (i.e., where all users use the same randomiser) LDP protocol A for frequency estimation achieving expected error

ℰ(n, k, ε) = O(√(log k/(n·min(1,ε²))))

for ε ≤ 1, with m messages and ℓ bits of communication per user. Then, for every integer L ≥ 1, there exists an LDP protocol A′ for frequency estimation achieving expected error

ℰ(n·⌈min(ε,L)⌉, k, ε/⌈min(ε,L)⌉) = O(√(log k/(n·min(L,ε,ε²))))

for all ε > 0, with m·⌈min(ε,L)⌉ messages and ℓ·⌈min(ε,L)⌉ bits of communication per user. Further, if A is a public-coin (resp. private-coin) protocol, then so is A′.

The idea behind this result is not new, and is used (in a slightly less general form) in [14, Appendix E.4]. We restate and prove it in this paper in a self-contained form, as we believe it to be of broader interest.
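To convey the shape of the transformation, here is a hedged Python sketch (our own; `randomiser` and `aggregate` are hypothetical callables standing in for any base protocol A as in the proposition). Each user sends r = ⌈min(ε, L)⌉ independent reports, each (ε/r)-LDP – so the user is ε-LDP by composition – and the server treats the n·r reports as coming from n·r users:

```python
import math

def boosted_protocol(data, eps, L, randomiser, aggregate):
    """Sketch of the transformation behind Proposition 9 (illustrative only).

    randomiser(x, eps_per_report) -> one report of the base protocol;
    aggregate(reports, eps_per_report) -> frequency estimate.
    """
    r = max(1, math.ceil(min(eps, L)))   # number of reports per user
    eps_per_report = eps / r             # composition: r * (eps/r) = eps
    reports = [randomiser(x, eps_per_report) for x in data for _ in range(r)]
    return aggregate(reports, eps_per_report)
```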

As a direct corollary (setting L = ⌈ε⌉), applying the above to Hadamard Response and RAPPOR, for instance, we obtain the following bounds:

Corollary 10.

For every ε>0, there exists a private-coin ε-LDP protocol (namely, a modification of Hadamard Response) for frequency estimation with expected error

O(√(log k/(n·min(ε,ε²))))

using ⌈ε⌉ messages per user, and log k + O(1) bits per message.

Corollary 11.

For every ε>0, there exists a private-coin ε-LDP protocol (namely, a modification of RAPPOR) for frequency estimation with expected error

O(√(log k/(n·min(ε,ε²))))

using ⌈ε⌉ messages per user, and k bits per message.

3.2 Tighter analysis for RAPPOR

To do so, we first recall some facts and notation about RAPPOR which will help with the analysis of its performance. A common simplification of the RAPPOR protocol [4, 3] is to parameterise it by ε as follows (a short implementation sketch is given after the protocol description).

  1. Given input x_i ∈ [k], one-hot encode it as a k-bit vector e_{x_i}, such that only the x_i-th bit is 1.

  2. Flip each bit independently with probability 1/(e^{ε/2}+1), and send the resulting noisy bit-vector Y_i to the server. (This is just the “Permanent Randomised Response” step of RAPPOR, parameterised by ε.)

  3. The server then receives n noisy bit vectors, computes Ȳ ≔ (1/n)·∑_{i=1}^n Y_i, and estimates

     q̂ = ((e^{ε/2}+1)/(e^{ε/2}−1))·Ȳ − (1/(e^{ε/2}−1))·𝟏_k   (2)

     where 𝟏_k is the all-ones vector of size k.
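For concreteness, the three steps above can be sketched in a few lines of Python (our own illustration of this simplified protocol, not the original RAPPOR implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def rappor_randomise(x: int, k: int, eps: float) -> np.ndarray:
    """One user's report: one-hot encode x, then flip each bit
    independently with probability 1/(e^{eps/2} + 1)."""
    y = np.zeros(k, dtype=int)
    y[x] = 1
    flips = rng.random(k) < 1.0 / (np.exp(eps / 2) + 1)
    return y ^ flips

def rappor_estimate(reports: np.ndarray, eps: float) -> np.ndarray:
    """Debiased server-side estimate, Equation (2)."""
    t = np.exp(eps / 2)
    return (t + 1) / (t - 1) * reports.mean(axis=0) - 1.0 / (t - 1)

reports = np.stack([rappor_randomise(0, k=8, eps=2.0) for _ in range(1000)])
q_hat = rappor_estimate(reports, eps=2.0)  # close to (1, 0, ..., 0)
```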

A standard fact, which we recall here for completeness, is that q̂ defined above is an unbiased estimator of the true vector of frequencies q:

Lemma 12 (Expectation of q̂).

We have 𝔼[q̂] = q.

We then have 𝔼[‖q̂ − q‖_∞] = ((e^{ε/2}+1)/(e^{ε/2}−1))·𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞], and so to bound the expected error it suffices to bound the expected maximum deviation of the Ȳ_j’s from their mean. Now, each Ȳ_j is the (normalised) sum of n independent Bernoulli random variables: the standard way to analyse this maximum is to recall that a sum of n independent Bernoullis is a sub-gaussian random variable with parameter at most n/4 (importantly, this parameter may not coincide with the variance, although it does in some important cases, e.g., for a Gaussian r.v.). Standard results (see, for example, [32, Chapter 2]) on the maximum of k (not necessarily independent) sub-gaussian random variables then give

𝔼[‖q̂ − q‖_∞] ≤ ((e^{ε/2}+1)/(e^{ε/2}−1))·√(log k/(2n)),

which indeed behaves as O(√(log k/(nε²))) for small ε. However, the bound quickly degrades for large ε, and only yields O(√(log k/n)) (no dependence on the privacy parameter at all!) as ε grows.

To remedy this, we will need a tighter analysis of the sub-gaussian parameter of Bernoulli random variables in the “very biased” regime (which is the one we have to handle for large ε, as then 1/(e^{ε/2}+1) → 0). Specifically, we will rely on the following characterisation of the sub-gaussian norm σ²(p) of a (centered) Bern(p) random variable, known as the Kearns–Saul inequality, which can be found in, e.g., [12]:

σ²(p) = 0 if p ∈ {0, 1};   σ²(p) = 1/4 if p = 1/2;   σ²(p) = (2p − 1)/(2·log(p/(1−p))) otherwise.   (3)

Importantly, this expression is symmetric: σ²(p) = σ²(1−p) for all p ∈ [0,1]. In our setting, each Y_j is the sum of n independent Bernoulli random variables with one of two symmetric parameters, p ≔ 1/(e^{ε/2}+1) or 1−p = e^{ε/2}/(e^{ε/2}+1). The sub-gaussian norm at play is then σ²(p) = (e^{ε/2}−1)/((e^{ε/2}+1)·ε). Using sub-additivity of the sub-gaussian parameter for independent random variables, we can bound the sub-gaussian norm of Y_j as

σ²(Y_j) ≤ n·(e^{ε/2}−1)/((e^{ε/2}+1)·ε)   (4)

We can use this to bound the maximum error of the estimator.

Theorem 13 (Expected maximum error of RAPPOR).

The expected maximum error of q̂ satisfies

𝔼[‖q̂ − q‖_∞] ≤ √(2(e^{ε/2}+1)·log k/(n·(e^{ε/2}−1)·ε)) = O(√(log k/(n·min(ε,ε²)))),   (5)

proving the main result of this subsection.
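As a quick numerical companion (our own sketch, not part of the paper’s analysis), one can evaluate the Kearns–Saul parameter from Equation (3) and the resulting bound (5):

```python
import numpy as np

def kearns_saul_sigma2(p: float) -> float:
    """Sub-gaussian parameter of a centered Bern(p), Equation (3)."""
    if p in (0.0, 1.0):
        return 0.0
    if p == 0.5:
        return 0.25
    return (2 * p - 1) / (2 * np.log(p / (1 - p)))

def rappor_linf_bound(k: int, n: int, eps: float) -> float:
    """Theorem 13's bound: sqrt(2(e^{eps/2}+1) log k / (n (e^{eps/2}-1) eps))."""
    t = np.exp(eps / 2)
    return float(np.sqrt(2 * (t + 1) * np.log(k) / (n * (t - 1) * eps)))

# For eps = 8, p = 1/(e^4 + 1) ~ 0.018: sigma2(p) = (e^4-1)/(8(e^4+1)) ~ 0.12,
# well below the worst-case 1/4 used in the earlier sub-gaussian baseline.
print(kearns_saul_sigma2(1 / (np.exp(4) + 1)), rappor_linf_bound(10**6, 10**5, 8.0))
```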

3.3 Optimal analysis for RAPPOR

It is natural to wonder whether this min(ε,ε2) dependence on the privacy parameter is order-optimal; especially as it appears in other locally private estimation tasks, such as mean estimation for high-dimensional Gaussians or product distributions [20, 1]. Quite surprisingly, we will show that for frequency estimation the error rate given in Equation 5 is not optimal for large values of ε, and that even the simple RAPPOR algorithm can achieve significantly better.

Theorem 14.

For ε ≥ 1, the expected error of RAPPOR for frequency estimation satisfies

𝔼[‖q̂ − q‖_∞] = O(√(log k/(n·e^{ε/2})) + (log k/(nε))·log n).

Importantly, this is better than the bound in Equation 5 whenever n = Ω̃(log k/ε), which is the regime of interest (small constant, or vanishing, error). To see why this better bound may hold for large ε, recall that the bound given in Equation 5 relies on analyzing the expected maximum of k (centered) Binomial random variables using their sub-gaussian behavior. This is good when the parameters of the Binomials are not too skewed; however, in the low-privacy regime, the parameters of the Bernoulli summands become very close to 0 (or 1): in that case, to analyze the expected maximum of the Binomials it is tighter to view them as having sub-gamma behavior. Details follow.

Proof of Theorem 14.

As alluded to above, we want to analyze the expected behavior of the maximum of Binomial random variables beyond the sub-gaussian regime. Invoking generic bounds for sub-gamma random variables, such as [11, Corollary 2.6], unfortunately does not lend itself to order-optimal bounds either. Instead, we rely on the “local Glivenko–Cantelli” bounds recently obtained by [17, 10], which provide a more refined upper bound. To introduce the result we will invoke, we first need some notation.

Definition 15.

Given μ ∈ [0,1]^ℕ, denote by μ̃ ∈ [0,1/2]^ℕ the sequence defined by μ̃_i ≔ min(μ_i, 1−μ_i) for all i ≥ 1, and by μ̃^↓ its non-increasing rearrangement. Finally, let p_μ denote the product distribution over {0,1}^ℕ with mean vector μ.

With this in hand, the main result of [17] can be restated as follows:

Theorem 16 ([17, Theorem 3]).

Let n ≥ 2, and suppose that μ ∈ [0,1]^ℕ is such that lim_{i→∞} μ̃^↓_i = 0. Let μ̂_n ≔ (1/n)·∑_{j=1}^n X_j ∈ [0,1]^ℕ denote the empirical estimator for μ given n i.i.d. samples X_1, …, X_n from p_μ. Then the following holds:

𝔼[‖μ̂_n − μ‖_∞] ≲ sup_{i≥1} √(μ̃^↓_i·log(i+1)/n) + (log n/n)·sup_{i≥1} (log(i+1)/log(1/μ̃^↓_i))

(Note that [10] recently improved on this result by up to a log n factor in the second term. For simplicity, we use the slightly weaker result of [17], as it is easier to manipulate.)

We want to apply this result to bound 𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞]. However, we face one obstacle in doing so, as we do not have a product distribution over {0,1}^k: while Ȳ_1, …, Ȳ_k are indeed independent, and each of the form

Ȳ_i = (1/n)·∑_{j=1}^n Y_{i,j}

where Y_{i,1}, …, Y_{i,n} are independent Bernoulli random variables, these Bernoullis are not identically distributed: exactly n_i ≔ n·q_i of them have parameter 1−p = e^{ε/2}/(e^{ε/2}+1), and the remaining n−n_i have parameter p = 1/(e^{ε/2}+1). Of course, we do not know the n_i’s, as this is what we are trying to estimate; all we have is that

∑_{i=1}^k n_i = n.

To circumvent this issue, let us write, for each 1 ≤ i ≤ k,

Ȳ_i = Ȳ_i^+ + Ȳ_i^−

where nȲ_i^+ ∼ Bin(n_i, 1−p) and nȲ_i^− ∼ Bin(n−n_i, p) are independent. We can then express

𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞] = 𝔼[max_{1≤i≤k} |Ȳ_i − 𝔼[Ȳ_i]|]
 ≤ 𝔼[max_{1≤i≤k} (|Ȳ_i^+ − 𝔼[Ȳ_i^+]| + |Ȳ_i^− − 𝔼[Ȳ_i^−]|)]
 ≤ 𝔼[max_{1≤i≤k} |Ȳ_i^+ − 𝔼[Ȳ_i^+]|] + 𝔼[max_{1≤i≤k} |Ȳ_i^− − 𝔼[Ȳ_i^−]|]   (6)

Now, instead of taking the maximum of k sums of Bernoullis with different parameters but the same number of summands (n summands), we take the maximum of k sums of Bernoullis with the same parameter (i.e., Binomials) but different numbers of summands (at most n). This does not necessarily seem like an improvement, and still does not let us apply Theorem 16 to either of the two expectations. However, if we could argue that “adding summands to each Binomial” cannot decrease the expected maximum, then we would be in good shape: that is, we want to upper bound

𝔼[max_{1≤i≤k} |Ȳ_i^+ − 𝔼[Ȳ_i^+]|]

by

𝔼[max_{1≤i≤k} |Z_i^+ − 𝔼[Z_i^+]|]

where nZ_i^+ ∼ Bin(n, 1−p) (instead of Bin(n_i, 1−p)). Intuitively, this seems reasonable, as adding independent summands should make the Binomial more likely to deviate from its expectation. The next lemma makes this intuition rigorous:

Lemma 17.

Fix n_1, …, n_k, m_1, …, m_k ∈ ℕ and p_1, …, p_k ∈ [0,1], with n_i ≤ m_i for all i. Let N_1, …, N_k and M_1, …, M_k be (not necessarily independent) random variables with N_i ∼ Bin(n_i, p_i) and M_i ∼ Bin(m_i, p_i). Then

𝔼[max_{1≤i≤k} |N_i − 𝔼[N_i]|] ≤ 𝔼[max_{1≤i≤k} |M_i − 𝔼[M_i]|].
Proof.

Set Ñ_i ≔ N_i − 𝔼[N_i] and M̃_i ≔ M_i − 𝔼[M_i] for all 1 ≤ i ≤ k. We give a coupling of (Ñ_i, M̃_i) such that

𝔼[M̃_i | Ñ_i] = Ñ_i.

Such a coupling can be obtained by setting M̃_i ≔ Ñ_i + Δ_i − 𝔼[Δ_i], where Δ_i ∼ Bin(m_i − n_i, p_i) is independent of Ñ_i. Then it is easy to check that M̃_i has the right distribution, since N_i + Δ_i ∼ Bin(m_i, p_i); and the conditional expectation is indeed as claimed. Using this coupling, we obtain

𝔼[max_{1≤i≤k} |Ñ_i|] = 𝔼[max_{1≤i≤k} |𝔼[M̃_i | Ñ_i]|]
 ≤ 𝔼[𝔼[max_{1≤i≤k} |M̃_i| | Ñ_1, …, Ñ_k]]   (Jensen’s inequality)
 = 𝔼[max_{1≤i≤k} |M̃_i|],

establishing the lemma. (More generally, via the existence of this coupling the argument shows that (Ñ_1, …, Ñ_k) ⪯_cx (M̃_1, …, M̃_k), i.e., domination in the convex order, which in turn is equivalent to having 𝔼[ϕ(Ñ_1, …, Ñ_k)] ≤ 𝔼[ϕ(M̃_1, …, M̃_k)] for every convex function ϕ.)

Let Z_1^+, …, Z_k^+ (resp. Z_1^−, …, Z_k^−) be i.i.d. random variables with nZ_i^+ ∼ Bin(n, 1−p) (resp. nZ_i^− ∼ Bin(n, p)). Invoking Lemma 17 with m_1 = ⋯ = m_k = n separately on the two expectations of Equation 6, we get

𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞] ≤ 𝔼[max_{1≤i≤k} |Z_i^+ − 𝔼[Z_i^+]|] + 𝔼[max_{1≤i≤k} |Z_i^− − 𝔼[Z_i^−]|]

Both of the terms in the RHS now fit the setting of Theorem 16. Moreover, since their parameters are 1−p (for the first expectation) and p (for the second), in both cases the corresponding μ̃_i = min(p, 1−p) = p = 1/(e^{ε/2}+1) is the same, and applying the theorem will give the same upper bound for both expectations. Thus, Theorem 16 yields

𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞] ≲ √(p·log(k+1)/n) + (log n/n)·(log(k+1)/log(1/p))

Recalling the setting of p, along with the fact that 𝔼[‖q̂ − q‖_∞] = ((e^{ε/2}+1)/(e^{ε/2}−1))·𝔼[‖Ȳ − 𝔼[Ȳ]‖_∞], finally gives

𝔼[‖q̂ − q‖_∞] ≲ √((log(k+1)/n)·(e^{ε/2}+1)/(e^{ε/2}−1)²) + (log n/n)·((e^{ε/2}+1)/(e^{ε/2}−1))·(log(k+1)/log(e^{ε/2}+1))   (7)

To conclude, observe that the right-hand side is O(√(log k/(nε²)) + (log k/(nε))·log n) for ε ≤ 1, and O(√(log k/(n·e^{ε/2})) + (log k/(nε))·log n) for ε ≥ 1.
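To complement the proof, the following self-contained Monte Carlo sketch (our own, with arbitrary parameter choices) simulates RAPPOR’s ℓ∞ error via the Binomial decomposition above, and compares it to the sub-gaussian bound of Theorem 13; for large ε the empirical error falls well below that bound, as Theorem 14 predicts:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, eps, trials = 512, 20_000, 6.0, 20
p = 1.0 / (np.exp(eps / 2) + 1)                   # bit-flip probability
t = np.exp(eps / 2)

errs = []
for _ in range(trials):
    ni = rng.multinomial(n, np.full(k, 1.0 / k))  # n_i users hold item i
    # Per coordinate i: n * Ybar_i = Bin(n_i, 1-p) + Bin(n-n_i, p), independently
    ybar = (rng.binomial(ni, 1 - p) + rng.binomial(n - ni, p)) / n
    q_hat = (t + 1) / (t - 1) * ybar - 1.0 / (t - 1)   # Equation (2)
    errs.append(np.max(np.abs(q_hat - ni / n)))        # l_inf error vs true frequencies

bound = np.sqrt(2 * (t + 1) * np.log(k) / (n * (t - 1) * eps))  # Theorem 13
print(f"empirical mean: {np.mean(errs):.4f}; sub-gaussian bound: {bound:.4f}")
```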

3.4 Upper bound for Projective Geometry Response

Projective Geometry Response (PGR), introduced by [24], achieves optimal rates for ℓ₂² error and communication, and near-optimal processing time. The protocol is based on the general template established in [4]: specifically, PGR relies on a set structure defined by projective planes, as detailed next. For a prime power d and k = (d^t − 1)/(d − 1), the authors define a t-dimensional vector space 𝔽_d^t, where each element x ∈ [k] is represented by one of the canonical basis vectors. These basis vectors in turn each uniquely determine a projective plane S(x), such that there are s = |S(x)| = (d^{t−1} − 1)/(d − 1) “high-probability” elements. Every one of these sets in turn has an intersection with every other set of size c = |S(x) ∩ S(x′)| = (d^{t−2} − 1)/(d − 1). By choosing a prime power d ≈ e^ε + 1, optimal error is achieved.

Theorem 18.

Projective Geometry Response (PGR) [24] achieves the optimal rate for the ℓ∞ error. More specifically, the expected ℓ∞ error of Projective Geometry Response for frequency estimation satisfies

𝔼[‖q̂ − q‖_∞] ≤ √(16(2e^ε+1)²·log(k+1)/(e^ε·(e^ε−1)²·n)) + (4(2e^ε+1)·log(k+1)/((e^ε−1)·ε·n))·log n
 = O(√(log k/(n·e^ε)) + (log k/(nε))·log n)

We will make use of the size of each subset s and the size of each intersection c as described above, as well as the probability of returning any element y from the output alphabet, given x as an input:

Q(Y=y | X=x) = e^ε/(s·e^ε + k − s) if y ∈ S(x), and 1/(s·e^ε + k − s) otherwise.   (8)

The estimate q̂ of the frequency vector is then given by

q̂_x ≔ α·(1/n)·∑_{i=1}^n 𝟙{Y_i ∈ S(x)} + β,   x ∈ [k]   (9)

where

α = ((e^ε−1)·s + k)/((e^ε−1)(s−c)),   β = −((e^ε−1)·c + s)/((e^ε−1)(s−c))   (10)

so that q̂ is an unbiased estimator of the true frequency vector q. Since a prime power can be found within a factor 2 of any number, we can choose d such that

e^ε + 1 ≤ d ≤ 2(e^ε + 1)   (11)

(While we could instead choose d < e^ε + 1 and make the inequality an undershoot by a factor of two, an investigation in that direction led to worse constants and trickier analysis.)
Recall that, for every integer ℓ ≥ 1,

d^ℓ − 1 = (d − 1)·(1 + d + ⋯ + d^{ℓ−1}),   (12)

an identity we will rely on extensively in the rest of the section.

 Remark 19.

Note that, as introduced above, we must have k = (d^t − 1)/(d − 1) for a prime power d satisfying (11). Other values of k must be rounded up, losing up to a factor O(e^ε) in the domain size (i.e., working instead with a domain size k′ = O(e^ε·k)). We hereafter ignore this detail, which does not affect the final bound of Theorem 18 unless ε ≳ log k, and assume that k is of the form stated above. For the same reason, we can additionally assume t ≥ 3.
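The parameter choices above are easy to instantiate. The following sketch (our own; the naive prime-power search is only meant for small d) computes d, k, s, c, and the constants of Equation (10):

```python
import math

def is_prime_power(m: int) -> bool:
    """Naive check whether m = p^a for some prime p (fine for small m)."""
    for p in range(2, m + 1):
        if m % p == 0:              # p is the smallest prime factor of m
            while m % p == 0:
                m //= p
            return m == 1
    return False

def pgr_parameters(eps: float, t: int = 3):
    """Smallest prime power d >= e^eps + 1 (so Equation (11) holds),
    then k, s, c from the projective set system and alpha, beta from (10)."""
    d = math.ceil(math.exp(eps) + 1)
    while not is_prime_power(d):
        d += 1
    k = (d**t - 1) // (d - 1)
    s = (d**(t - 1) - 1) // (d - 1)
    c = (d**(t - 2) - 1) // (d - 1)
    e = math.exp(eps)
    alpha = ((e - 1) * s + k) / ((e - 1) * (s - c))
    beta = -((e - 1) * c + s) / ((e - 1) * (s - c))
    return d, k, s, c, alpha, beta

print(pgr_parameters(2.0))  # d = 9, k = 91, s = 10, c = 1 for t = 3
```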

Proof of Theorem 18.

We start with two lemmas which will be useful in proving the optimal error rate.

Lemma 20.

The common size s of every subset S(x), x ∈ [k], satisfies s ≥ e^ε + 2.

Proof.

We can rewrite s as

s = (d^{t−1} − 1)/(d − 1) = ∑_{i=0}^{t−2} d^i.

As t ≥ 3, we have, by applying Equations 11 and 12,

s = ∑_{i=0}^{t−2} d^i ≥ 1 + d ≥ e^ε + 2.

Now, remembering that whatever expected error we compute will be multiplied by the normalising constant α, we would like to bound α in terms of ε.

Lemma 21.

The value α defined in (10) satisfies α ≤ 2 + (2+d)/(e^ε−1) ≤ 2 + (2e^ε+4)/(e^ε−1).

Proof.

First note that

α = ((e^ε−1)·s + k)/((e^ε−1)(s−c)) = s/(s−c) + k/((e^ε−1)(s−c)),

and note that by a simple application of Equation 12 we get s − c = d^{t−2}. Applying the identity again, we get

s/d^{t−2} = ∑_{i=0}^{t−2} d^{i−t+2} = 1/d^{t−2} + 1/d^{t−3} + ⋯ + 1 ≤ 2,   (13)

where the final inequality comes from the fact that, with d ≥ 2, this is bounded by the geometric series. For k we have the same series, with the addition of a single d term:

k/d^{t−2} = ∑_{i=0}^{t−1} d^{i−t+2} = 1/d^{t−2} + 1/d^{t−3} + ⋯ + 1 + d.

The result is then at most 2 + d ≤ 2 + 2(e^ε+1), by applying Equation 11; the claimed bound on α follows.

We are now ready to apply Theorem 16. As we did for RAPPOR, we will break the expected maximum into the sum of the error over two vectors of Binomials, Z^+ ∼ Bin(n, p^+) and Z^− ∼ Bin(n, p^−), where, recalling (8),

p^+ = e^ε/(s·e^ε + k − s),   p^− = (s−c)/(s·e^ε + k − s)   (14)

In this case we do not have that p^+ = 1 − p^−, so we will need to bound the following:

𝔼[‖q̂ − q‖_∞] ≤ α·(√(log(k+1)/n)·(√(p^+) + √(p^−)) + (log(k+1)/n)·log(n)·(1/log(1/p^+) + 1/log(1/p^−))).

We briefly note that while RAPPOR has clear independence between coordinates, the result of Theorem 16 due to [17] does not require independence; therefore it applies to PGR, and is likely suitable for the analysis of other subset-based protocols.

First, we upper bound √(p^+) + √(p^−):

Lemma 22.

For p^+, p^− defined in (14), we have p^+ + p^− ≤ 2/e^ε, and so √(p^+) + √(p^−) ≤ √(4/e^ε) = 2·e^{−ε/2}.

Proof.

First, p^+ is clearly less than 1/s when we remove k − s from the denominator, and so is at most 1/(e^ε+2) ≤ 1/e^ε by Lemma 20. For the other term, notice that by Equation 13 we have s/(s−c) ≥ 1, from which

p^− ≤ (s−c)/(s·e^ε) ≤ 1/e^ε.

Overall, we get that p^+ + p^− ≤ 2/e^ε. The conclusion follows from the AM-GM inequality, as √(p^+) + √(p^−) ≤ √(2(p^+ + p^−)).

Next, we bound 1/log(1/p^+) + 1/log(1/p^−):

Lemma 23.

We have 1/log(1/p^+) + 1/log(1/p^−) ≤ 2/ε.

Proof.

We will proceed in both cases by lower-bounding the denominators. First,

log(1/p^−) = log((s·e^ε + k − s)/(s−c)) ≥ log(s·e^ε/(s−c)) ≥ log(e^ε) = ε.

Next,

log(1/p^+) = log((s·e^ε + k − s)/e^ε) ≥ log(s) ≥ log(e^ε + 2) ≥ ε,   (by Lemma 20)

so adding the reciprocals of the two terms gives a bound of 2/ε.

The only step left to bound the ℓ∞ error of PGR is to take the product of these bounds. Proceeding to do so, we arrive at:

𝔼[‖q̂ − q‖_∞] ≤ √(16(2e^ε+1)²·log(k+1)/(e^ε·(e^ε−1)²·n)) + (4(2e^ε+1)·log(k+1)/((e^ε−1)·ε·n))·log n

which in the low privacy regime gives:

𝔼[‖q̂ − q‖_∞] ≤ O(√(log k/(n·e^ε)) + (log k/(nε))·log n)

as claimed.
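As a numerical sanity check of the three lemmas used in the proof (our own, for one arbitrary parameter setting):

```python
import math

eps, d, t = 2.0, 9, 3              # d = 9 is a prime power in [e^2 + 1, 2(e^2 + 1)]
e = math.exp(eps)
k = (d**t - 1) // (d - 1)          # 91
s = (d**(t - 1) - 1) // (d - 1)    # 10
c = (d**(t - 2) - 1) // (d - 1)    # 1

p_plus = e / (s * e + k - s)       # Equation (14)
p_minus = (s - c) / (s * e + k - s)

assert s >= e + 2                                                       # Lemma 20
assert p_plus + p_minus <= 2 / e                                        # Lemma 22
assert 1 / math.log(1 / p_plus) + 1 / math.log(1 / p_minus) <= 2 / eps  # Lemma 23
```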

4 Worst-case, information-theoretic lower bounds

We will follow the “chi-squared lower bound” framework of [2] to obtain our information-theoretic lower bounds against non-interactive locally private protocols:

Theorem 24.

Fix any ε > 0. Any non-interactive (public- or private-coin) ε-LDP protocol Π for distribution estimation from n users must have minimax expected ℓ∞ error

Ω(max(√(log k/(n(e^ε−1)²)), √(log k/(n·e^ε)), log k/(nε))).
Proof.

Suppose there exists a (non-interactive, public- or private-coin) ε-LDP protocol Π for n users which learns any 𝐩 to expected ℓ∞ error α when each user gets an independent sample from 𝐩 as input:

𝔼[‖𝐩 − 𝐩̂‖_∞] ≤ α   (15)

where 𝐩̂ is the output of Π when run on X_1, …, X_n ∼ 𝐩.

Now, consider the family 𝒫_α = {𝐩_z}_{z∈[k]} of probability distributions over [k], where, for z ∈ [k], 𝐩_z is defined by

𝐩_z(x) = (1−4α)/k + 4α·𝟙{z=x},   x ∈ [k]   (16)

(that is, 𝐩_z is a mixture of the uniform distribution 𝐮_k and a point mass on z, with mixture coefficients 1−4α and 4α). Consider the following process: first, we select Z from [k] uniformly at random, then generate n i.i.d. samples X_1, …, X_n from 𝐩_Z and run Π on these samples, obtaining a hypothesis 𝐩̂. We finally set Ẑ to be the element of the domain with the largest probability under 𝐩̂, i.e.,

Ẑ = argmax_{i∈[k]} 𝐩̂(i)

(breaking ties arbitrarily). The first claim is that doing so allows one to guess the correct value of Z with high probability: indeed, since the gap between the highest and second-highest element of 𝐩_Z is 4α, we have

Pr[Ẑ = Z] ≥ Pr[‖𝐩 − 𝐩̂‖_∞ < 2α] ≥ 1 − 𝔼[‖𝐩_Z − 𝐩̂‖_∞]/(2α) ≥ 1/2,   (17)

the second-to-last inequality being Markov’s. However, by Fano’s inequality applied to the Markov chain Z → 𝐩_Z → X^n → (Y^n, R) → 𝐩̂ → Ẑ (where X^n is the tuple of i.i.d. samples, and (Y^n, R) denotes the tuple of n messages, along with the public randomness, resulting from the protocol Π), we get, recalling that Z is chosen uniformly in [k], that

Pr[Ẑ = Z] ≤ (I(Z; (Y^n, R)) + log 2)/log k   (18)

and so, putting Equations 17 and 18 together, we get

I(Z; (Y^n, R)) ≥ (1/2)·log(k/4) = Ω(log k).   (19)

This gives us the first ingredient: a lower bound on I(Z; (Y^n, R)). For the second, we need to obtain an upper bound on this same mutual information as a function of ε, n, k, and α. To do so, observe first that, by the chain rule,

I(Z; (Y^n, R)) = I(Z; Y^n | R) + I(Z; R) = I(Z; Y^n | R),

where the second equality follows from the independence of the public randomness R from Z. This is convenient, as the messages Y^n = (Y_1, …, Y_n) are independent conditioned on R, and so we get

I(Z; (Y^n, R)) ≤ ∑_{j=1}^n I(Z; Y_j | R) ≤ n·max_{1≤j≤n} I(Z; Y_j | R)   (20)

Consider any user j, using locally private randomizer Q = Q_{j,R} : [k] → 𝒴 (for notational simplicity, we drop hereafter the dependence on j and R). Let 𝐩̄ ≔ 𝔼_Z[𝐩_Z] denote the average input distribution (over Z), i.e., the uniform mixture of all 𝐩_z’s. Then we can rewrite the mutual information as

I(Z; Y_j | R) = 𝔼_Z[KL(Q𝐩_Z ‖ Q𝐩̄)]   (21)

where, for a given input distribution 𝐩, Q𝐩(·) ≔ 𝔼_{X∼𝐩}[Q(· | X)] denotes the output distribution (over 𝒴) induced by the randomizer Q on input X ∼ 𝐩.

First lower bound (good for small ε).

We then proceed by upper-bounding the KL divergence by the χ² divergence and unrolling the latter’s definition, getting

I(Z; Y_j | R) ≤ 𝔼_Z[χ²(Q𝐩_Z ‖ Q𝐩̄)] = 𝔼_Z[∑_{y∈𝒴} (𝔼_{X∼𝐩_Z}[Q(y|X)] − 𝔼_{X∼𝐩̄}[Q(y|X)])² / 𝔼_{X∼𝐩̄}[Q(y|X)]]
 = 𝔼_Z[∑_{y∈𝒴} (∑_{x∈[k]} Q(y|x)·(𝐩_Z(x) − 𝐩̄(x)))² / (∑_{x∈[k]} Q(y|x)·𝐩̄(x))]

Now observe that, for our choice of 𝒫_α, 𝐩̄ is simply the uniform distribution over [k], and that for all x ∈ [k] we then have 𝐩_Z(x) − 𝐩̄(x) = 4α·(𝟙{Z=x} − 1/k). Then, letting Φ(x) ≔ e_x − (1/k)·𝟏_k ∈ ℝ^k for x ∈ [k], we get

I(Z; Y_j | R) ≤ 16α²k²·𝔼_Z[∑_{y∈𝒴} (𝔼_{X∼𝐮}[Q(y|X)·(𝟙{Z=X} − 1/k)])² / 𝔼_{X∼𝐮}[Q(y|X)]]
 = 16α²k²·∑_{y∈𝒴} 𝔼_Z[(𝔼_{X∼𝐮}[Q(y|X)·(𝟙{Z=X} − 1/k)])²] / 𝔼_{X∼𝐮}[Q(y|X)]
 = 16α²k·∑_{y∈𝒴} ∑_{i=1}^k (𝔼_{X∼𝐮}[Q(y|X)·(𝟙{i=X} − 1/k)])² / 𝔼_{X∼𝐮}[Q(y|X)]
 = 16α²k·∑_{y∈𝒴} ∑_{i=1}^k 𝔼_{X∼𝐮}[Q(y|X)·Φ(X)_i]² / 𝔼_{X∼𝐮}[Q(y|X)]   (22)

Note that, for any fixed 1 ≤ i ≤ k, we have

𝔼_{X∼𝐮}[Φ_i(X)] = 0,   𝔼_{X∼𝐮}[Φ_i(X)²] = (1/k)·(1 − 1/k),   𝔼_{X∼𝐮}[|Φ_i(X)|] = (2/k)·(1 − 1/k).

For simplicity, we write 𝔼X[] for 𝔼X𝐮[]. We will use the fact that, Q being ε-LDP, we have

|Q(y|x) − 𝔼_X[Q(y|X)]| ≤ (e^ε − 1)·𝔼_X[Q(y|X)]   (23)

for every x[k] and y𝒴. With this in hand, starting from (22), we can write

I(Z; Y_j | R) ≤ 16α²k·∑_{y∈𝒴} ∑_{i=1}^k 𝔼_X[Q(y|X)·Φ(X)_i]² / 𝔼_X[Q(y|X)] ≤ 64α²(e^ε−1)²,

where the derivation relies on the fact that 𝔼_X[Φ_i(X)] = 0, on (23), and on 𝔼_X[|Φ_i(X)|] ≤ 2/k (details can be found in the full version). Using this last bound along with Equations 19 and 20, we get that

(1/2)·log(k/4) ≤ I(Z; (Y^n, R)) ≤ 64α²·n·(e^ε−1)²,   (24)

i.e.,

α ≥ (1/8)·√(log(k/4)/(2n(e^ε−1)²)),   (25)

showing the Ω(√(log k/(nε²))) lower bound for small ε.

The second and third lower bounds can be established in an analogous fashion: the details can be found in the full version.
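For illustration (our own sketch, not part of the formal argument), the hard family of Equation (16) and the argmax decoder Ẑ can be written as:

```python
import numpy as np

def hard_instance(k: int, z: int, alpha: float) -> np.ndarray:
    """p_z from Equation (16): mixture (1-4a)*uniform + 4a*(point mass at z)."""
    p = np.full(k, (1 - 4 * alpha) / k)
    p[z] += 4 * alpha
    return p

def decode(p_hat: np.ndarray) -> int:
    """Z_hat = argmax_i p_hat(i): correct whenever p_hat is within 2*alpha
    of p_Z in l_inf, since the gap to every other coordinate is 4*alpha."""
    return int(np.argmax(p_hat))

p = hard_instance(k=10, z=3, alpha=0.05)
assert np.isclose(p.sum(), 1.0) and decode(p) == 3
```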

5 Amplification by shuffling

As mentioned in the Introduction, one of the key motivations for studying locally private histogram estimation in the low-privacy (high-ε) regime is the implication for histogram estimation in the shuffle model of privacy (see, e.g., [27, 26, 7, 16]), in light of the “plug-and-play” amplification-by-shuffling results allowing one to “translate” the former into the latter. Specifically, we will use the following result of Feldman, McMillan, and Talwar:

Theorem 25 ([23, Theorem 3.1]).

For any domain 𝒳, let R : 𝒳 → 𝒴 be an ε_L-DP local randomiser; and let S be the algorithm that, given a tuple of n messages y ∈ 𝒴^n, samples a uniformly random permutation π over [n] and outputs (y_{π(1)}, …, y_{π(n)}). Then, for any δ ∈ (0,1] such that ε_L ≤ log(n/(16·log(2/δ))), S∘R^n is (ε, δ)-DP, where

ε ≤ log(1 + 8·((e^{ε_L}−1)/(e^{ε_L}+1))·(√(e^{ε_L}·log(4/δ)/n) + e^{ε_L}/n)).

In particular, if ε_L ≥ 1 then ε = O(√(e^{ε_L}·log(1/δ)/n)), and if ε_L < 1 then ε = O(ε_L·√(log(1/δ)/n)).

This implies the following:

Lemma 26 (Amplification by shuffling).

Fix any δ ∈ (0,1], ε ∈ (0,1], and n such that ε > 16·√(log(4/δ)/n). Then, for

ε_L ≔ log(ε²n/(256·log(4/δ))) = Θ(log(ε²n/log(1/δ))),

shuffling the messages of n users using the same ε_L-LDP randomizer satisfies (robust) (ε, δ)-shuffle differential privacy.

Proof.

Note that for ε, δ as in the statement and ε_L as defined, we have 0 < ε_L ≤ log(n/(16·log(2/δ))). Applying Theorem 25, we get (ε′, δ)-privacy for

ε′ = log(1 + 8·((e^{ε_L}−1)/(e^{ε_L}+1))·(ε/16 + ε²/(256·log(4/δ)))) ≤ ε,

which proves the statement.
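In practice, Lemma 26 acts as a simple calculator: given a target (ε, δ) and the number of users n, it yields the local privacy parameter ε_L to run the local randomiser with. A minimal sketch (hypothetical helper name):

```python
import math

def local_eps_for_shuffle(eps: float, delta: float, n: int) -> float:
    """Lemma 26's choice: eps_L = log(eps^2 * n / (256 * log(4/delta))),
    valid when eps > 16 * sqrt(log(4/delta) / n)."""
    assert eps > 16 * math.sqrt(math.log(4 / delta) / n)
    return math.log(eps**2 * n / (256 * math.log(4 / delta)))

# e.g. one million users and a central (1, 1e-6)-shuffle-DP target:
print(local_eps_for_shuffle(1.0, 1e-6, 10**6))  # ~5.55: run the randomiser at eps_L ~ 5.55
```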

Theorem 27.

For n = Ω(log(1/δ)/ε²) and ε ∈ (0,1], Shuffled Projective Geometry Response achieves expected maximum error O(√(log k·log(1/δ))/(nε)), with O(log k) bits of communication (and one single message) per user.

Proof.

For n ≥ (500/ε²)·log(4/δ), the restriction on ε from Lemma 26 holds, and, setting ε_L as in the lemma, we also have ε_L = Ω(1). We then invoke the bound of Theorem 18, focusing on the first part of the bound, which dominates when log k ≤ n·ε_L²/(e^{ε_L}·log²n). This leads to an upper bound on the error of the order

√(log k/(n·e^{ε_L})) ≍ √((log k/n)·(log(1/δ)/(ε²n))) = √(log k·log(1/δ))/(nε),

as desired. It only remains to argue that the first term of the bound did, indeed, dominate the error. As mentioned above, this is the case whenever

e^{ε_L}/ε_L² ≤ n/(log k·log²n),

for which a weaker, sufficient condition is e^{ε_L} ≤ ε_L²·n/log²n, that is, n ≳ e^{ε_L}. But this follows from our setting of ε_L, for which e^{ε_L} = ε²n/(256·log(4/δ)) ≤ n.

Comparing this result with the summary of local, shuffle, and central histogram error bounds available in [16, Table 1] shows that shuffled PGR achieves the best error of any protocol which sends a constant number of messages.

More specifically, focusing on three representative known protocols: the only known one-message-per-user protocol, due to [16], achieves much worse error (and has the same restriction n = Ω(log(1/δ)/ε²) on the parameters), and requires k bits of communication per user; while the protocol of [26], which achieves the same error as Theorem 27, uses k^{Ω(1)} messages of log k bits each per user. Finally, a protocol of [6] does achieve better error, but at the cost of sending k+1 messages, each of log k bits, per user. Thus, our bounds demonstrate that shuffled PGR achieves state-of-the-art ℓ∞ error with only one message of log k bits.

6 Discussion and future work

The logarithmic factor in the upper bound.

We prove our upper bounds using the result of [17], which is where the log n term appears. The authors conjectured that this term could be removed in general, but follow-up work [10] demonstrated by counterexample that this is not the case. They do, however, show that there is a distribution-dependent interpolation between the term being T(n)/n and T(n)·log n/n. This logarithmic factor has implications for understanding which error regime dominates, given the parameters of the algorithm (ε, k, n). While it is possible that a careful application of the tools in [10] could resolve this matter in general or for a specific use case, in the meantime we emphasise that empirical analysis is still crucial.

Tighter analysis of other protocols.

While we analyse RAPPOR, which admits a simple analysis due to the independence of coordinates, and Projective Geometry Response, which represents the state of the art in low-communication LDP protocols, we believe the tools introduced in this paper are applicable in a very general way to most LDP protocols. Of particular interest would be Subset Selection, which has optimal mutual information between the inputs and outputs of the local randomiser, and the two Hadamard Response protocols that we include in our empirical analysis.

Histograms in the shuffle model.

While our shuffle DP result (Theorem 27) yields better error, communication, and number of messages, it does have a limitation on the parameter range, namely n = Ω(log(1/δ)/ε²). It would be interesting to weaken this requirement to match the central DP one (where the dependence on ε is only linear). Moreover, one could hope to further improve the resulting error to the central DP bound (i.e., a min(log k, log(1/δ)) dependence instead of √(log k·log(1/δ))), or, alternatively, prove a matching lower bound separating the two models.

Broader applicability of lower bound techniques.

While we prove lower bounds for the specific case of frequency estimation, the tools used are extremely general, and we imagine that their application could provide new lower bounds for a variety of problems in the LDP setting, especially with the low-privacy regime in mind.

References

  • [1] Jayadev Acharya, Clément L. Canonne, Ziteng Sun, and Himanshu Tyagi. Unified lower bounds for interactive high-dimensional estimation under information constraints. In NeurIPS, 2023.
  • [2] Jayadev Acharya, Clément L. Canonne, and Himanshu Tyagi. Inference under information constraints I: Lower bounds from chi-square contraction. IEEE Transactions on Information Theory, 66(12):7835–7855, 2020. doi:10.1109/TIT.2020.3028440.
  • [3] Jayadev Acharya and Ziteng Sun. Communication complexity in locally private distribution estimation and heavy hitters. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 51–60, Long Beach, California, USA, 2019. PMLR. URL: http://proceedings.mlr.press/v97/acharya19c.html.
  • [4] Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Hadamard response: Estimating distributions privately, efficiently, and with little communication. In Kamalika Chaudhuri and Masashi Sugiyama, editors, Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 1120–1129. PMLR, 2019. URL: http://proceedings.mlr.press/v89/acharya19a.html.
  • [5] Apple Privacy Team. Learning with privacy at scale, 2017. URL: https://machinelearning.apple.com/research/learning-with-privacy-at-scale.
  • [6] Victor Balcer and Albert Cheu. Separating Local & Shuffled Differential Privacy via Histograms, April 2020. arXiv:1911.06879, doi:10.48550/arXiv.1911.06879.
  • [7] Victor Balcer and Albert Cheu. Separating local & shuffled differential privacy via histograms. In ITC, volume 163 of LIPIcs, pages 1:1–1:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.ITC.2020.1.
  • [8] Raef Bassily and Adam Smith. Local, Private, Efficient Protocols for Succinct Histograms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 127–135, June 2015. doi:10.1145/2746539.2746632.
  • [9] Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, Ananth Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, Julien Tinnés, and Bernhard Seefeld. Prochlo: Strong privacy for analytics in the crowd. CoRR, abs/1710.00901, 2017. arXiv:1710.00901.
  • [10] Moïse Blanchard and Vaclav Voracek. Tight bounds for local glivenko-cantelli. In Claire Vernade and Daniel Hsu, editors, Proceedings of The 35th International Conference on Algorithmic Learning Theory, volume 237 of Proceedings of Machine Learning Research, pages 179–220. PMLR, February 2024. URL: https://proceedings.mlr.press/v237/blanchard24a.html.
  • [11] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013.
  • [12] V. V. Buldygin and K. K. Moskvichova. The sub-Gaussian norm of a binary random variable. Theory of Probability and Mathematical Statistics, 86:33–49, August 2013. doi:10.1090/S0094-9000-2013-00887-4.
  • [13] Clément L. Canonne. A short note on learning discrete distributions, 2020. arXiv:2002.11457.
  • [14] Wei-Ning Chen, Peter Kairouz, and Ayfer Özgür. Breaking the communication-privacy-accuracy trilemma. IEEE Trans. Inf. Theory, 69(2):1261–1281, 2023. doi:10.1109/TIT.2022.3218772.
  • [15] Albert Cheu, Adam D. Smith, Jonathan R. Ullman, David Zeber, and Maxim Zhilyaev. Distributed differential privacy via shuffling. In EUROCRYPT (1), volume 11476 of Lecture Notes in Computer Science, pages 375–403. Springer, 2019. doi:10.1007/978-3-030-17653-2_13.
  • [16] Albert Cheu and Maxim Zhilyaev. Differentially private histograms in the shuffle model from fake users. In SP, pages 440–457. IEEE, 2022. doi:10.1109/SP46214.2022.9833614.
  • [17] Doron Cohen and Aryeh Kontorovich. Local glivenko-cantelli. In COLT, volume 195 of Proceedings of Machine Learning Research, page 715. PMLR, 2023. URL: https://proceedings.mlr.press/v195/cohen23a.html.
  • [18] Graham Cormode, Somesh Jha, Tejas Kulkarni, Ninghui Li, Divesh Srivastava, and Tianhao Wang. Privacy at scale: Local differential privacy in practice. In SIGMOD Conference, pages 1655–1658. ACM, 2018. doi:10.1145/3183713.3197390.
  • [19] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy and statistical minimax rates. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, pages 429–438. IEEE Computer Society, 2013. doi:10.1109/FOCS.2013.53.
  • [20] John C. Duchi and Ryan Rogers. Lower bounds for locally private estimation via communication complexity. In COLT, volume 99 of Proceedings of Machine Learning Research, pages 1161–1191. PMLR, 2019. URL: http://proceedings.mlr.press/v99/duchi19a.html.
  • [21] Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Shuang Song, Kunal Talwar, and Abhradeep Thakurta. Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation. arXiv:2001.03618 [cs], January 2020. arXiv:2001.03618.
  • [22] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM Conference on Computer and Communications Security, CCS ’14, pages 1054–1067, New York, NY, USA, 2014. ACM. doi:10.1145/2660267.2660348.
  • [23] Vitaly Feldman, Audra McMillan, and Kunal Talwar. Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022, pages 954–964. IEEE, 2021. doi:10.1109/FOCS52979.2021.00096.
  • [24] Vitaly Feldman, Jelani Nelson, Huy Nguyen, and Kunal Talwar. Private frequency estimation via projective geometry. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 6418–6433. PMLR, July 2022. URL: https://proceedings.mlr.press/v162/feldman22a.html.
  • [25] Vitaly Feldman and Kunal Talwar. Lossless compression of efficient private local randomizers. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 3208–3219. PMLR, July 2021. URL: https://proceedings.mlr.press/v139/feldman21a.html.
  • [26] Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, and Ameya Velingker. On the power of multiple anonymous messages: Frequency estimation and selection in the shuffle model of differential privacy. In EUROCRYPT (3), volume 12698 of Lecture Notes in Computer Science, pages 463–488. Springer, 2021. doi:10.1007/978-3-030-77883-5_16.
  • [27] Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Rasmus Pagh. Private counting from anonymous messages: Near-optimal accuracy with vanishing communication overhead. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 3505–3514. PMLR, 2020. URL: http://proceedings.mlr.press/v119/ghazi20a.html.
  • [28] Justin Hsu, Sanjeev Khanna, and Aaron Roth. Distributed private heavy hitters. In ICALP (1), volume 7391 of Lecture Notes in Computer Science, pages 461–472. Springer, 2012. doi:10.1007/978-3-642-31594-7_39.
  • [29] Ziyue Huang, Yuan Qiu, Ke Yi, and Graham Cormode. Frequency estimation under multiparty differential privacy: One-shot and streaming. Proc. VLDB Endow., 15(10):2058–2070, 2022. doi:10.14778/3547305.3547312.
  • [30] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. Extremal mechanisms for local differential privacy. Journal of Machine Learning Research, 17(17):1–51, 2016. URL: http://jmlr.org/papers/v17/15-135.html.
  • [31] Sudeep Kamath, Alon Orlitsky, Dheeraj Pichapati, and Ananda Theertha Suresh. On Learning Distributions from their Samples. In Proceedings of The 28th Conference on Learning Theory, pages 1066–1100. PMLR, June 2015. URL: http://proceedings.mlr.press/v40/Kamath15.html.
  • [32] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019.
  • [33] Shaowei Wang, Liusheng Huang, Pengzhan Wang, Yiwen Nie, Hongli Xu, Wei Yang, Xiang-Yang Li, and Chunming Qiao. Mutual information optimally local private discrete distribution estimation. arXiv, abs/1607.08025, 2016. arXiv:1607.08025.
  • [34] Min Ye and Alexander Barg. Optimal schemes for discrete distribution estimation under locally differential privacy. IEEE Transactions on Information Theory, 64(8):5662–5676, 2018. doi:10.1109/TIT.2018.2809790.