
Characterizing the Distinguishability of Product Distributions Through Multicalibration

Cassandra Marcussen (Harvard University, Cambridge, MA, USA), Aaron Putterman (Harvard University, Cambridge, MA, USA), and Salil Vadhan (Harvard University, Cambridge, MA, USA)
Abstract

Given a sequence of samples $x_1, \ldots, x_k$ promised to be drawn from one of two distributions $X_0, X_1$, a well-studied problem in statistics is to decide which distribution the samples are from. Information theoretically, the maximum advantage in distinguishing the two distributions given $k$ samples is captured by the total variation distance between $X_0^{\otimes k}$ and $X_1^{\otimes k}$. However, when we restrict our attention to efficient distinguishers (i.e., small circuits) of these two distributions, exactly characterizing the ability to distinguish $X_0^{\otimes k}$ and $X_1^{\otimes k}$ is more involved and less understood.

In this work, we give a general way to reduce bounds on the computational indistinguishability of $X_0$ and $X_1$ to bounds on the information-theoretic indistinguishability of some specific, related variables $\tilde{X}_0$ and $\tilde{X}_1$. As a consequence, we prove a new, tight characterization of the number of samples $k$ needed to efficiently distinguish $X_0^{\otimes k}$ and $X_1^{\otimes k}$ with constant advantage as

$$k = \Theta\left(\frac{1}{d_H^2(\tilde{X}_0, \tilde{X}_1)}\right),$$

which is the inverse of the squared Hellinger distance $d_H$ between two distributions $\tilde{X}_0$ and $\tilde{X}_1$ that are computationally indistinguishable from $X_0$ and $X_1$. Likewise, our framework can be used to re-derive a result of Halevi and Rabin (TCC 2008) and Geier (TCC 2022), proving nearly-tight bounds on how computational indistinguishability scales with the number of samples for arbitrary product distributions.

At the heart of our work is the use of the Multicalibration Theorem (Hébert-Johnson, Kim, Reingold, Rothblum 2018) in a way inspired by recent work of Casacuberta, Dwork, and Vadhan (STOC 2024). Multicalibration allows us to relate the computational indistinguishability of $X_0, X_1$ to the statistical indistinguishability of $\tilde{X}_0, \tilde{X}_1$ (for lower bounds on $k$) and to construct explicit circuits distinguishing between $\tilde{X}_0, \tilde{X}_1$ and consequently $X_0, X_1$ (for upper bounds on $k$).

Keywords and phrases:
Multicalibration, computational distinguishability
Funding:
Cassandra Marcussen: Supported in part by an NDSEG fellowship, and by NSF Award 2152413 and a Simons Investigator Award to Madhu Sudan.
Aaron Putterman: Supported in part by the Simons Investigator Awards of Madhu Sudan and Salil Vadhan and NSF Award CCF 2152413.
Salil Vadhan: Supported by a Simons Investigator Award.
Copyright and License:
© Cassandra Marcussen, Aaron Putterman, and Salil Vadhan; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Circuit complexity
Related Version:
Full Version: https://arxiv.org/abs/2412.03562
Acknowledgements:
This project was inspired by a final project completed by the first two authors during Cynthia Dwork’s course “Topics in Theory for Society: The Theory of Algorithmic Fairness” at Harvard (Spring 2024). We thank Cynthia Dwork for initial discussions regarding multicalibration and its applications to complexity and hardness amplification. We thank the anonymous Eurocrypt and CCC reviewers for their suggestions and feedback. We thank Pranay Tankala for a correction regarding the statement of the Multicalibration Theorem.
Editors:
Srikanth Srinivasan

1 Introduction

Given a sequence of samples $x_1, \ldots, x_k$ promised to be drawn from one of two distributions $X_0, X_1$, each defined on a domain $\mathcal{X}$, a well-studied problem in statistics is to decide which distribution the samples are from. Information theoretically, the maximum advantage in distinguishing $X_0, X_1$ given $k$ samples is $d_{TV}(X_0^{\otimes k}, X_1^{\otimes k})$, where $X_b^{\otimes k}$ denotes the result of independently sampling $k$ times from $X_b$, and $d_{TV}(p, q) = \frac{1}{2}\sum_{x \in \mathcal{X}} |p(x) - q(x)|$ denotes the total variation distance. This advantage can then be related to the single-sample relation between $X_0, X_1$ in a few ways:

1. An upper bound is given by the inequality:

$$d_{TV}(X_0^{\otimes k}, X_1^{\otimes k}) \le 1 - (1 - d_{TV}(X_0, X_1))^k \le k \cdot d_{TV}(X_0, X_1). \tag{1}$$
2. To obtain a good 2-sided bound, we can use the Hellinger distance:

$$1 - e^{-k \cdot d_H^2(X_0, X_1)} \le d_{TV}(X_0^{\otimes k}, X_1^{\otimes k}) \le \sqrt{2k} \cdot d_H(X_0, X_1), \tag{2}$$

where the Hellinger distance is defined as

$$d_H^2(X_0, X_1) = \frac{1}{2} \sum_{i \in \mathcal{X}} \left(\sqrt{\Pr[X_0 = i]} - \sqrt{\Pr[X_1 = i]}\right)^2.$$

In particular, Inequality (2) above shows that $k = \Theta(1/d_H^2(X_0, X_1))$ samples is both necessary and sufficient for distinguishing $X_0, X_1$ with constant advantage. In contrast, observe that Equation (1) in the first point above is actually not tight. To illustrate, we can consider the two distributions below:

1. $X_0$ such that $\Pr[X_0 = 1] = 1/2$, $\Pr[X_0 = 0] = 1/2$.

2. $X_1$ such that $\Pr[X_1 = 1] = 1/2 + \varepsilon$, $\Pr[X_1 = 0] = 1/2 - \varepsilon$.

Equation (1) implies only that $d_{TV}(X_0^{\otimes k}, X_1^{\otimes k}) \le k\varepsilon$, and thus does not rule out the possibility of distinguishing $X_0, X_1$ with constant advantage after $k = O(1/\varepsilon)$ samples. However, as we see in the Hellinger-distance formulation of Equation (2), in this example $k = \Theta(1/\varepsilon^2)$ samples is both necessary and sufficient to distinguish $X_0, X_1$ with constant advantage. Thus, a key advantage in using Hellinger distance to characterize statistical distinguishability is in getting instance-optimal characterizations of the statistical distinguishability for every pair of random variables.
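To make the gap concrete, the following minimal Python sketch (ours, for illustration; the value $\varepsilon = 0.05$ is an arbitrary choice) computes both distances for the biased-coin example above:

```python
import math

eps = 0.05  # illustrative bias
X0 = {0: 0.5, 1: 0.5}
X1 = {0: 0.5 - eps, 1: 0.5 + eps}

def tv(p, q):
    # total variation distance: (1/2) * sum_x |p(x) - q(x)|
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def hellinger_sq(p, q):
    # squared Hellinger distance: (1/2) * sum_x (sqrt(p(x)) - sqrt(q(x)))^2
    return 0.5 * sum((math.sqrt(p[x]) - math.sqrt(q[x])) ** 2 for x in p)

print(tv(X0, X1))            # eps: bound (1) only rules out k << 1/eps samples
print(hellinger_sq(X0, X1))  # ~ eps^2/2: bound (2) pins k at Theta(1/eps^2)
```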

In computer science, this task of distinguishing distributions arises in many domains, from cryptography to computational complexity. However, we typically work with the computational analogue of total variation distance known as computational indistinguishability. We say that $X_0, X_1$ are $\delta$-indistinguishable by circuits of size $s$ if for every function $f: \mathcal{X} \to \{0,1\}$ computable by size $s$ circuits, $\left|\sum_{x \in \mathcal{X}} f(x)(\Pr[X_0 = x] - \Pr[X_1 = x])\right| \le \delta$. In this paper, we state most of our results using this concrete security formulation, where $s$ and $\delta$ are separate parameters. However, our results can be translated into the traditional asymptotic complexity formulation where $s = n^{\omega(1)}$ and $\delta = n^{-\omega(1)}$ for a security parameter $n$ and unspecified super-polynomial functions $n^{\omega(1)}$. In fact, we are interested in the setting of weak indistinguishability where $\delta$ is non-negligible ($\delta \ge 1/\mathrm{poly}(n)$) while $s$ remains super-polynomial ($s = n^{\omega(1)}$). We elaborate on the asymptotic formulations of our results in Section 1.2.

The classic hybrid argument of Goldwasser and Micali [6] shows that if $X_0, X_1$ are $\delta$-indistinguishable for circuits of size $s$, then $X_0^{\otimes k}, X_1^{\otimes k}$ are $k\delta$-indistinguishable for circuits of size $s$, which corresponds to the weaker bound in Equation (1) above. However, a more refined understanding of (in)distinguishability under repeated samples has been elusive. Indeed, the work of Halevi and Rabin [9] was the first to show that $X_0^{\otimes k}, X_1^{\otimes k}$ are $(1 - (1 - \delta)^k + \varepsilon)$-indistinguishable for circuits of size $s' = s/\mathrm{poly}(k, 1/\varepsilon)$ (with subsequent improvements to the parameters being made by [5]). Note that when $s = n^{\omega(1)}$ and $k \le \mathrm{poly}(n)$, we can take $\varepsilon = n^{-\omega(1)}$ and retain $s' = n^{\omega(1)}$, so this bound agrees with the tighter bound in Equation (1) above up to an additive negligible term.

This bound on computational distinguishability can be viewed as an extension of Equation (1) to the computational setting. Just as Equation (1) is not tight in the information-theoretic setting, the characterization of [9, 5] is not an instance-optimal characterization of the computational distinguishability of random variables, and a computational analogue of Equation (2) is still missing. In this work, we overcome this shortcoming by building on the key technique of multicalibration, as we explain below.

1.1 Main Results

In this work, we give a general way to reduce reasoning about computational indistinguishability to reasoning about the information-theoretic case. This also allows us to deduce as corollaries both the characterization achieved by [9, 5] and a computational analogue of Inequality (2) (with the total variation distance of $X_0, X_1$ replaced by the computational indistinguishability of $X_0, X_1$, and the Hellinger distance between $X_0, X_1$ replaced by a quantity we call the pseudo-Hellinger distance between $X_0, X_1$).

Theorem 1.1.

For every pair of random variables $X_0, X_1$ over $\mathcal{X}$, every integer $s$, and every $\varepsilon > 0$, there exist random variables $\tilde{X}_0, \tilde{X}_1$ such that for every $k > 0$:

1. $X_b$ is $\varepsilon$-indistinguishable from $\tilde{X}_b$ by circuits of size $s$, for each $b \in \{0, 1\}$.

2. $X_0^{\otimes k}$ and $X_1^{\otimes k}$ are $\left(d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}) + 2k\varepsilon\right)$-indistinguishable by circuits of size $s$.

3. $X_0^{\otimes k}$ and $X_1^{\otimes k}$ are $\left(d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}) - 2k\varepsilon\right)$-distinguishable by circuits of size $s' = O(s \cdot k/\varepsilon^6) + \mathrm{poly}(k/\varepsilon)$.

4. The statements above also hold with $\tilde{X}_1 = X_1$, but $s' = O(s \cdot k/\varepsilon^{12}) + \mathrm{poly}(k/\varepsilon)$.

 Remark 1.2.

More generally, we can define indistinguishability with respect to an arbitrary class of functions $\mathcal{F}$. See the full version of the paper for generalized versions of the different parts of the above statement.

 Remark 1.3.

We can also generalize the above theorem to arbitrary product distributions. We prove this in the full version of the paper.

As elaborated upon in Section 1.3, the above theorem follows from assigning $\tilde{X}_0$ and $\tilde{X}_1$ to be a careful “mixture” of $X_0$ and $X_1$. This mixture depends on carefully partitioning the domain space of the random variables using multicalibration. With these random variables, Item 2 follows directly from Item 1 via a hybrid argument, so the value of Theorem 1.1 is that it provides not only an upper bound on (in)distinguishability but a matching lower bound, via Item 3. Thinking of $s = n^{\omega(1)}$, $k \le \mathrm{poly}(n)$, $\varepsilon = 1/s$, we see that up to a negligible additive term ($k\varepsilon$), and a polynomial change in circuit size ($s$ vs. $s'$), the computational indistinguishability of $X_0, X_1$ under multiple samples is tightly captured by the information-theoretic indistinguishability of $\tilde{X}_0, \tilde{X}_1$ under multiple samples. In particular, $k = \Theta(1/d_H^2(\tilde{X}_0, \tilde{X}_1))$ samples are both necessary and sufficient to distinguish $X_0, X_1$ with constant advantage. We can abstract this latter corollary as follows:

Definition 1.4.

For random variables $X_0, X_1$ over $\mathcal{X}$, $s \in \mathbb{N}$, and $\varepsilon > 0$, the $(s, \varepsilon)$ pseudo-Hellinger distance between $X_0$ and $X_1$ is the smallest $\delta$ such that there exist random variables $\tilde{X}_0, \tilde{X}_1$ such that:

1. $X_0$ is $\varepsilon$-indistinguishable from $\tilde{X}_0$ for circuits of size $s$.

2. $X_1$ is $\varepsilon$-indistinguishable from $\tilde{X}_1$ for circuits of size $s$.

3. $d_H(\tilde{X}_0, \tilde{X}_1) \le \delta$.

With this definition in hand, we derive the following characterization of the indistinguishability of $X_0, X_1$, which is a computational analogue of Inequality (2):

Theorem 1.5.

If $X_0, X_1$ have $(s, \varepsilon)$ pseudo-Hellinger distance $\delta$, then for every $k$:

1. $X_0^{\otimes k}, X_1^{\otimes k}$ are $\left(\sqrt{2k\delta^2} + 2k\varepsilon\right)$-indistinguishable for circuits of size $s$.

2. $X_0^{\otimes k}, X_1^{\otimes k}$ are $\left(1 - e^{-k\delta^2} - 2k\varepsilon\right)$-distinguishable by circuits of size $s' = O(s \cdot k/\varepsilon^6) + \mathrm{poly}(k/\varepsilon)$.

Thus, the number of samples needed to distinguish $X_0, X_1$ with constant advantage is $\Theta(1/\delta^2)$, where $\delta$ is the pseudo-Hellinger distance. As an immediate consequence, just like the traditional notion of Hellinger distance in the statistical distinguishability setting, our notion of pseudo-Hellinger distance gives an instance-optimal characterization of the computational distinguishability of any pair of random variables. This is a key benefit of our work over prior works [9, 5].

To prove Part (1) of Theorem 1.5, we first observe that, by parts (1) and (2) of Definition 1.4 and a simple hybrid argument, the computational indistinguishability of $X_0^{\otimes k}$ and $X_1^{\otimes k}$ is bounded above by the computational indistinguishability of $\tilde{X}_0^{\otimes k}$ and $\tilde{X}_1^{\otimes k}$, plus $2k\varepsilon$. Next, the computational indistinguishability of $\tilde{X}_0^{\otimes k}$ and $\tilde{X}_1^{\otimes k}$ is upper-bounded by the total variation distance between $\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}$, which by Inequality (2) is bounded above by $\sqrt{2k \cdot d_H^2(\tilde{X}_0, \tilde{X}_1)} = \sqrt{2k\delta^2}$. This gives us the bound in Part (1) of the theorem.

To prove Part (2) of Theorem 1.5, we use Theorem 1.1 to obtain $\tilde{X}_0, \tilde{X}_1$ that are $\varepsilon$-indistinguishable from $X_0, X_1$ such that the computational distinguishability of $X_0^{\otimes k}, X_1^{\otimes k}$ by circuits of size $s'$ is at least $d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}) - 2k\varepsilon$. In turn, Inequality (2) tells us that this is at least $1 - e^{-k \cdot d_H^2(\tilde{X}_0, \tilde{X}_1)} - 2k\varepsilon$, which is then at least $1 - e^{-k\delta^2} - 2k\varepsilon$, where $\delta$ is the pseudo-Hellinger distance, i.e., the smallest $d_H(\tilde{X}_0, \tilde{X}_1)$ over all $\tilde{X}_0, \tilde{X}_1$ that are $\varepsilon$-indistinguishable from $X_0, X_1$.

Additionally, using Part 4 of Theorem 1.1, we can prove that there exists an $\tilde{X}_0$ such that $k = \Theta(1/d_H^2(\tilde{X}_0, X_1))$ samples are both necessary and sufficient to distinguish $X_0, X_1$ with constant advantage. When $X_1$ is uniform on $\mathcal{X}$, the squared Hellinger distance $d_H^2(\tilde{X}_0, X_1)$ can be related to the Rényi $\frac{1}{2}$-entropy of $\tilde{X}_0$, defined as follows:

$$H_{1/2}(\tilde{X}_0) = 2\log_2\left(\sum_{i \in \mathcal{X}} \sqrt{\Pr[\tilde{X}_0 = i]}\right).$$

Specifically, we have

$$d_H^2(\tilde{X}_0, X_1) = 1 - 2^{-\frac{1}{2}\left(\log|\mathcal{X}| - H_{1/2}(\tilde{X}_0)\right)} = \Theta\left(\min\left\{\log|\mathcal{X}| - H_{1/2}(\tilde{X}_0),\, 1\right\}\right).$$

This allows us to create another suitable abstraction for the case where X1 is uniform, beginning with the following definition of pseudo-Rényi entropy.

Definition 1.6.

For a random variable $X_0$ over $\mathcal{X}$, $s \in \mathbb{N}$, and $\varepsilon > 0$, the $(s, \varepsilon)$ pseudo-Rényi $\frac{1}{2}$-entropy of $X_0$ is the largest $r$ such that there exists a random variable $\tilde{X}_0$ such that:

1. $X_0$ is $\varepsilon$-indistinguishable from $\tilde{X}_0$ for circuits of size $s$.

2. $H_{1/2}(\tilde{X}_0) = r$.

This definition yields the following characterization of the indistinguishability of X0 from the uniform distribution:

Theorem 1.7.

If $X_0$ is a distribution over $\mathcal{X}$ that has $(s, \varepsilon)$ pseudo-Rényi $\frac{1}{2}$-entropy $r = \log|\mathcal{X}| - \delta$ and $\mathcal{U}$ is the uniform distribution over $\mathcal{X}$, then for every $k$:

1. $X_0^{\otimes k}, \mathcal{U}^{\otimes k}$ are $\left(O(\sqrt{k\delta}) + 2k\varepsilon\right)$-indistinguishable for circuits of size $s$.

2. $X_0^{\otimes k}, \mathcal{U}^{\otimes k}$ are $\left(1 - e^{-\Omega(k \cdot \min\{\delta, 1\})} - 2k\varepsilon\right)$-distinguishable by circuits of size $s' = O(s \cdot k/\varepsilon^{12}) + \mathrm{poly}(k/\varepsilon)$.

The number of samples needed to distinguish $X_0$ from uniform is $\Theta(1/\delta)$, where $\delta$ is the gap between the pseudo-Rényi $\frac{1}{2}$-entropy and its maximum possible value (namely $\log|\mathcal{X}|$).

As mentioned, we also can deduce Geier's result [5] as stated above directly from Theorem 1.1. A formal statement and comparison can be found in the full version of the paper, but we note that Geier's quantitative bound (the specific polynomial loss in circuit complexity) is better than ours. In addition, Geier proves a version of the result for uniform distinguishers, whereas we only work with nonuniform distinguishers (i.e., Boolean circuits). Both of these limitations come from the currently known theorems about multicalibration, which is the main technical tool we use (as discussed below), and it is an interesting problem to obtain multicalibration theorems that provide tight quantitative bounds and yield uniform-complexity results in applications such as ours. The benefit of using multicalibration is that it provides a direct translation between the computational and information-theoretic settings (e.g., as captured in Theorem 1.1), which not only yields existing results as corollaries but also offers new ones, such as Theorem 1.5.

1.2 Asymptotic Complexity Formulations

In the foundations of cryptography, it is common to state computational indistinguishability results in terms of asymptotic polynomial complexity. In this section, we demonstrate the extensibility of our results to the asymptotic setting by presenting Theorem 1.5 in this form, along with concrete definitions of this asymptotic regime. We start by formalizing indistinguishability in the asymptotic setting (i.e., with respect to ensembles of random variables):

Definition 1.8.

Let $X = \{X_n\}_{n \in \mathbb{N}}$ and $Y = \{Y_n\}_{n \in \mathbb{N}}$ be ensembles of random variables, where each $X_n, Y_n$ is supported on $\{0,1\}^{m(n)}$ and $m(n) = n^{O(1)}$. For $\eta: \mathbb{N} \to [0,1]$, we say that $X \approx_{\eta}^{\mathrm{comp}} Y$ if for all $c \in \mathbb{N}$, there exists an $n_0$ such that for all $n \ge n_0$, $X_n$ is $(\eta(n) + n^{-c})$-indistinguishable from $Y_n$ by circuits of size $n^c$.

Equivalently, we say $X \approx_{\eta}^{\mathrm{comp}} Y$ if there exist $s(n) = n^{\omega(1)}$ and $\varepsilon(n) = n^{-\omega(1)}$ such that for all $n$, $X_n$ is $(\eta(n) + \varepsilon(n))$-indistinguishable from $Y_n$ with respect to size $s(n)$ circuits. (The equivalence of these two formulations goes back to Bellare [1].)

For simplicity, we will also use $X \approx^{\mathrm{comp}} Y$ to denote that there exists some function $\eta: \mathbb{N} \to [0,1]$ with $\eta(n) = n^{-\omega(1)}$ such that $X \approx_{\eta}^{\mathrm{comp}} Y$.

We also generalize the notion of pseudo-Hellinger distance (Definition 1.4) to the setting of ensembles of random variables:

Definition 1.9.

For ensembles of random variables $X$ and $Y$, we say that $X$ and $Y$ have pseudo-Hellinger distance at most $\delta$ (denoted $X \approx_{\delta}^{cH} Y$) for a function $\delta: \mathbb{N} \to [0,1]$, if there exist ensembles $\tilde{X} = \{\tilde{X}_n\}, \tilde{Y} = \{\tilde{Y}_n\}$ such that $X \approx^{\mathrm{comp}} \tilde{X}$, $Y \approx^{\mathrm{comp}} \tilde{Y}$, and for all $n$, $d_H(\tilde{X}_n, \tilde{Y}_n) \le \delta(n)$.

Finally, we require a notion of the sample complexity required to distinguish ensembles:

Definition 1.10.

We say that ensembles $X, Y$ have computational sample complexity at least $k: \mathbb{N} \to \mathbb{N}$, for $k(n) = n^{O(1)}$, if $X^{\otimes k} \approx_{1/2}^{\mathrm{comp}} Y^{\otimes k}$ (where $X^{\otimes k} = \{X_n^{\otimes k(n)}\}$); this choice of $1/2$ is arbitrary, and can be amplified through repetition. We denote this as $X \approx_{k}^{cS} Y$.

With these definitions, we can now state a corollary of our characterization of the indistinguishability of random variables in terms of their pseudo-Hellinger distance.

Corollary 1.11 (Asymptotic Formulation of Theorem 1.5).

Let $X, Y$ be ensembles of random variables, let $\delta: \mathbb{N} \to [0,1]$, and let $k: \mathbb{N} \to \mathbb{N}$ be a function such that $k(n) = n^{O(1)}$ and $1/\delta(n) = n^{O(1)}$.

Then:

1. If $X \approx_{\delta}^{cH} Y$, then $X \approx_{k}^{cS} Y$ for some $k(n) = \Omega\left(\frac{1}{\delta(n)^2}\right)$.

2. If $X \approx_{k}^{cS} Y$, then $X \approx_{\delta}^{cH} Y$ for some $\delta(n) = O\left(\frac{1}{\sqrt{k(n)}}\right)$.

We defer the proof of this to the appendix of the full version of the paper.

1.3 Technical Overview

Our work builds on the recent work of Casacuberta, Dwork, and Vadhan [2]. Drawing inspiration from [4], they showed how the recent notion of multicalibration from the algorithmic fairness literature [10] leads to simpler and more illuminating proofs of a number of fundamental known results in complexity theory and cryptography, such as the Impagliazzo Hardcore Lemma [12], the Dense Model Theorem [14], and characterizations of pseudoentropy [16]. We show how multicalibration can be used to derive new results about computational indistinguishability (Theorems 1.1 and 1.5) and, in general, to reduce computational reasoning to information-theoretic reasoning. Specifically, applying the Multicalibration Theorem [10] to distinguishing problems in a way inspired by [2], we obtain the following. Let $\tilde{X}_b|_{p(\tilde{X}_b) = y}$ be the distribution where $x$ is sampled according to $\tilde{X}_b$ conditioned on $p(x) = y$, for some function $p: \mathcal{X} \to [m]$. Let $p(X_0)$ be the distribution over $[m]$ where for $y \in [m]$, the probability that $p(X_0) = y$ is exactly $\sum_{x \in p^{-1}(y)} \Pr[X_0 = x]$.

Theorem 1.12.

For every pair of random variables $X_0, X_1$, every positive integer $s$, and every $\varepsilon > 0$, there exist random variables $\tilde{X}_0, \tilde{X}_1$ and a function $p: \mathcal{X} \to [m]$ for $m = O(1/\varepsilon)$ such that:

(a) $\tilde{X}_0$ is $\varepsilon$-indistinguishable from $X_0$ and $\tilde{X}_1$ is $\varepsilon$-indistinguishable from $X_1$ for circuits of size $s$.

(b) $p(X_0)$ is identically distributed to $p(\tilde{X}_0)$ and $p(X_1)$ is identically distributed to $p(\tilde{X}_1)$.

(c) For every $y \in [m]$, $\tilde{X}_0|_{p(\tilde{X}_0) = y}$ is identically distributed to $\tilde{X}_1|_{p(\tilde{X}_1) = y}$.

(d) $p$ is computable by circuits of size $O(s/\varepsilon^6)$.

Because the above theorem shows that $\tilde{X}_0, X_0$ and $\tilde{X}_1, X_1$ are $\varepsilon$-indistinguishable for circuits of size $s$, a simple hybrid argument shows that $\tilde{X}_0^{\otimes k}, X_0^{\otimes k}$ and $\tilde{X}_1^{\otimes k}, X_1^{\otimes k}$ are $k\varepsilon$-indistinguishable for circuits of size $s$. Thus, one direction of Theorem 1.1 clearly follows given Theorem 1.12: size $s$ circuits cannot distinguish $X_0^{\otimes k}, X_1^{\otimes k}$ better than $d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}) + 2k\varepsilon$.

However, the other direction is more subtle; namely, showing that one can distinguish $X_0^{\otimes k}, X_1^{\otimes k}$ with advantage approaching $d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k})$ with small circuits. For this, we heavily rely on items (b), (c), and (d) from Theorem 1.12, as well as the property that $m = O(1/\varepsilon)$. The intuition is the following: consider an element $x$ that is sampled from either $\tilde{X}_0$ or $\tilde{X}_1$, and let $p(x) = y$. By item (c), we know that once we condition on a value $p(x) = y$, $\tilde{X}_0, \tilde{X}_1$ are identically distributed. Thus, there is no information to be gained from the sample $x$ besides the label of the partition piece that $x$ is in (i.e., the value $p(x)$). In fact, this gives us a simple recipe for the optimal distinguisher between $\tilde{X}_0, \tilde{X}_1$: for each sample $x$ we see, let us compute the value $p(x) = y$, and see how much more likely it is to sample an element in $p^{-1}(y)$ under $\tilde{X}_0$ vs. $\tilde{X}_1$. We can simply keep a running product over all of the samples $x_1, \ldots, x_k$ of the form

$$\prod_{j=1}^{k} \frac{\Pr_{x \sim \tilde{X}_1}[p(x) = p(x_j)]}{\Pr_{x \sim \tilde{X}_0}[p(x) = p(x_j)]}.$$

If the above product is larger than 1, this means that the sequence of samples is more likely under $\tilde{X}_1$, and if it is less than 1, then the sequence of parts is more likely under $\tilde{X}_0$. In particular, calculating this simple expression and checking whether it is larger than 1 is an optimal distinguisher for $\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}$, as it is just the maximum likelihood decoder (that is to say, this decoder achieves distinguishing advantage $d_{TV}(\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k})$ between $\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}$). All that remains is to show that we can encode this distinguisher using small circuits. Here, we rely on point (d) of Theorem 1.12: computing the function $p$ can be done with circuits of small size. Thus, for any element $x$, we compute the index $p(x)$ of the part that it is in, and feed this index into a look-up table (encoded non-uniformly) that returns the value $\frac{\Pr_{x' \sim \tilde{X}_1}[p(x') = p(x)]}{\Pr_{x' \sim \tilde{X}_0}[p(x') = p(x)]}$. Across all $k$ elements we see, we then must only keep track of the product, and return whether the value it computes is at least 1. Here, we use the fact that $p$ maps into a small range $[m]$ to prove that we can actually encode the look-up table mapping $y \in [m]$ to the values $\frac{\Pr_{x \sim \tilde{X}_1}[p(x) = y]}{\Pr_{x \sim \tilde{X}_0}[p(x) = y]}$ without using too large of a circuit. Of course, there are also issues with overflow and underflow in performing the numerical calculations, but we show this can be handled formally with small circuits in the full version of the paper.
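The following Python sketch captures the logic of this distinguisher (a simplification of ours, not the paper's circuit construction: the partition function `p` and probability tables `tX0`, `tX1` standing in for $\tilde{X}_0, \tilde{X}_1$ are assumed given, and we sum logarithms rather than handle overflow with fixed-precision circuit arithmetic):

```python
import math

def build_lookup(p, tX0, tX1, m):
    """Look-up table mapping each part index y to the likelihood ratio
    Pr[p(x)=y] under tX1 divided by Pr[p(x)=y] under tX0."""
    mass0, mass1 = [0.0] * m, [0.0] * m
    for x, prob in tX0.items():
        mass0[p(x)] += prob
    for x, prob in tX1.items():
        mass1[p(x)] += prob
    # assumes every part has positive mass under tX0 (zero-mass parts
    # would be special-cased in a full treatment)
    return [mass1[y] / mass0[y] for y in range(m)]

def distinguish(samples, p, lookup):
    """Maximum likelihood decoder: output 1 ('from tX1') iff the product of
    per-sample likelihood ratios exceeds 1, i.e. the log-sum exceeds 0."""
    log_ratio = sum(math.log(lookup[p(x)]) for x in samples)
    return 1 if log_ratio > 0 else 0
```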

Finally, it remains to analyze the distinguisher we have constructed here for $\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}$ when we apply it to $X_0^{\otimes k}, X_1^{\otimes k}$. Here, we rely on Part (b) of Theorem 1.12. Indeed, the distinguisher relies only on the probabilities $p(\tilde{X}_0), p(\tilde{X}_1)$ (i.e., the probability mass placed on each value of $y$). But the distributions $X_0, X_1$ place exactly the same probability mass on each part of the partition (formally, $p(X_b) = p(\tilde{X}_b)$). Thus, whatever distinguishing advantage our distinguisher achieves on $\tilde{X}_0^{\otimes k}, \tilde{X}_1^{\otimes k}$ is exactly the same as its distinguishing advantage on $X_0^{\otimes k}, X_1^{\otimes k}$. This then yields the second part of Theorem 1.1.

1.3.1 Multicalibrated Partitions

We now review the concept of multicalibration and describe how we prove Theorem 1.12 from the Multicalibration Theorem.

Multicalibration is a notion that first arose in the algorithmic fairness literature. Roughly speaking, its goal is, given a predictor $h$ of an unknown function $g$, and a domain $\mathcal{X}$ (which is often thought of as a population of individuals), to ensure that every desired subpopulation $S \subseteq \mathcal{X}$ receives calibrated predictions from $h$. The subpopulations are specified by their characteristic functions $f$, which come from a family $\mathcal{F}$. More formally, we say that for a domain $\mathcal{X}$ and distribution $\mathcal{D}$ over $\mathcal{X}$, a function $g: \mathcal{X} \to [0,1]$, and a class $\mathcal{F}$ of functions $f: \mathcal{X} \to [0,1]$, $h$ is an $(\mathcal{F}, \varepsilon)$ multicalibrated predictor for $g$ with respect to $\mathcal{D}$ if, for all $f \in \mathcal{F}$ and $v \in \mathrm{Image}(h)$:

$$\left|\mathbb{E}_{x \sim \mathcal{D}}\left[f(x)(g(x) - h(x)) \mid h(x) = v\right]\right| \le \varepsilon.$$

The sets $\{h^{-1}(v) : v \in \mathrm{Image}(h)\}$ define a partition of $\mathcal{X}$, which gives rise to the following more convenient formulation. Let $\mathcal{D}|_P$ denote the distribution $\mathcal{D}$ conditioned on being in the subset $P \subseteq \mathcal{X}$ of the domain.

We use a partition-based characterization of multicalibration as used in previous works [8, 7, 2]:

Definition 1.13 (Multicalibration [10] as formulated in [2]).

Let $g: \mathcal{X} \to [0,1]$ be an arbitrary function, $\mathcal{D}$ be a probability distribution over $\mathcal{X}$, and, for an integer $s \in \mathbb{Z}^+$, let $\mathcal{F}(s)$ be the class of functions $f: \mathcal{X} \to [0,1]$ computable by size $s$ circuits. Let $\varepsilon, \gamma > 0$ be constants. Then, we say that a partition $\mathcal{P}$ of $\mathcal{X}$ is $(\mathcal{F}(s), \varepsilon, \gamma)$-approximately multicalibrated for $g$ on $\mathcal{D}$ if for every $f \in \mathcal{F}(s)$ and every $P \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P] \ge \gamma$, it holds that

$$\left|\mathbb{E}_{x \sim \mathcal{D}|_P}\left[f(x)(g(x) - v_P)\right]\right| \le \varepsilon,$$

where we define $v_P := \mathbb{E}_{x \sim \mathcal{D}|_P}[g(x)]$.

For the class $\mathcal{F}(s)$ of functions computable by size $s$ circuits, let $\mathcal{P}(s, k)$ denote the set of partitions $\mathcal{P}$ such that there exists $\hat{f} \in \mathcal{F}(s)$, $\hat{f}: \mathcal{X} \to [k]$, with $\mathcal{P} = \{\hat{f}^{-1}(1), \ldots, \hat{f}^{-1}(k)\}$.

The result of Hébert-Johnson, Kim, Reingold, and Rothblum [10] can be stated as follows in the language of partitions:

Theorem 1.14 (Multicalibration Theorem [10]).

Let $\mathcal{X}$ be a finite domain; for an integer $s \in \mathbb{Z}^+$, let $\mathcal{F}(s)$ be the class of functions $f: \mathcal{X} \to \{0,1\}$ computable by size $s$ circuits; and let $g: \mathcal{X} \to [0,1]$ be an arbitrary function, $\mathcal{D}$ be a probability distribution over $\mathcal{X}$, and $\varepsilon, \gamma > 0$. Then, there exists an $(\mathcal{F}(s), \varepsilon, \gamma)$-approximately multicalibrated partition $\mathcal{P}$ of $\mathcal{X}$ for $g$ on $\mathcal{D}$ such that $\mathcal{P} \in \mathcal{P}(W, k)$, where $W = O(s/(\varepsilon^2\gamma^2)) + O\left((1/(\varepsilon^4\gamma)) \cdot \log(|\mathcal{X}|/\varepsilon)\right)$ and $k = O(1/\varepsilon)$.

In particular, one way to understand the above definition is that the partition $\mathcal{P}$ breaks the domain into parts such that within each part, for all functions $f \in \mathcal{F}(s)$, the function $g$ is $\varepsilon$-indistinguishable from its expected value.

Recently, this perspective on multicalibration has proven to be quite fruitful, with applications to machine learning [7], graph theory [4], and complexity theory and cryptography [4, 2]. Philosophically, the work of [2] showed how to reprove (and strengthen) theorems of average-case hardness and indistinguishability using multicalibration (specifically, the Impagliazzo Hardcore Lemma [12], the Dense Model Theorem [14], and characterizations of pseudoentropy [16]). One consequence of their work was a “loose” intuition that multicalibration translates questions about computational indistinguishability into questions about statistical indistinguishability. Our goal is to show this translation more clearly and generally, and underline directly how multicalibration reduces an understanding of computational indistinguishability to an understanding of statistical indistinguishability (as captured by Theorem 1.1).

1.3.2 Invoking Multicalibration

As mentioned above, multicalibration has already found a host of applications. However, for the specific purpose of distinguishing two distributions $X_0, X_1$, it is not immediately clear how to apply this framework. Of course, we can naturally view each distribution as given by its probability mass function from $\mathcal{X}$ to $[0,1]$, but in order to create multicalibrated partitions, we need a function $g$ with respect to which the partitions should be calibrated. For this, our first observation is to use the following function $g$:

$$g(x) = \begin{cases} 0 & \text{with probability } \frac{\Pr[X_0 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}, \\[4pt] 1 & \text{with probability } \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}. \end{cases}$$

Roughly speaking, the function $g$ is motivated by looking at the distribution $\mathcal{D} = \frac{1}{2}(X_0 + X_1)$. In this distribution, the probability of seeing an element $x$ is $(\Pr[X_0 = x] + \Pr[X_1 = x])/2$. However, we can also understand the sampling procedure as first choosing one of $X_0, X_1$ with probability $1/2$, and then sampling from the chosen distribution. The probability that $x$ comes from $X_0$ is then $\Pr[X_0 = x]/2$, and similarly for $X_1$. $\mathbb{E}_{x \sim \mathcal{D}}[g(x)]$ is thus measuring the relative fraction of the time that an element $x$ comes from $X_1$.
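As a randomized function, $g$ can be sampled directly from the probability tables of $X_0, X_1$; a minimal sketch of ours (assuming dictionaries of probabilities, and $x$ in the support of $\mathcal{D}$):

```python
import random

def g(x, X0, X1):
    # output 1 with probability Pr[X1=x] / (Pr[X0=x] + Pr[X1=x]),
    # i.e., the posterior probability that x was drawn from X1 under D
    p1 = X1.get(x, 0.0) / (X0.get(x, 0.0) + X1.get(x, 0.0))
    return 1 if random.random() < p1 else 0
```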

Importantly, $g$ is now a well-defined function with respect to which we can construct a multicalibrated partition. Then we will define $\tilde{X}_b$ for $b \in \{0,1\}$ by replacing $X_b|_P$ with $\mathcal{D}|_P$.

We define this procedure formally below:

Definition 1.15.

For a pair of random variables $X_0, X_1$, positive integer $s$, and parameter $\varepsilon$, we define the distribution $\mathcal{D}$, function $g$, and random variables $\tilde{X}_0, \tilde{X}_1$ as follows.

(a) Let the distribution $\mathcal{D}$ be as follows: To sample from $\mathcal{D}$, first pick $B \in \{0,1\}$ uniformly at random, then output a sample $x \sim X_B$. Note that

$$\Pr_{x' \sim \mathcal{D}}[x' = x] = \frac{1}{2}\Pr[X_0 = x] + \frac{1}{2}\Pr[X_1 = x].$$

For a subset of the domain $\mathcal{S} \subseteq \mathcal{X}$, let $\mathcal{D}|_{\mathcal{S}}$ be the conditional distribution of $\mathcal{D}$ over the set $\mathcal{S}$.

(b) We then define the randomized function $g$ as:

$$g(x) = \begin{cases} 0 & \text{with probability } \frac{\Pr[X_0 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}, \\[4pt] 1 & \text{with probability } \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}. \end{cases}$$

(c) Random variables $\tilde{X}_0, \tilde{X}_1$: Consider the multicalibrated partition $\mathcal{P} = \{P_i\}$ guaranteed by the Multicalibration Theorem when applied to the function $g$, distribution $\mathcal{D}$ over domain $\mathcal{X}$, class of functions $f: \mathcal{X} \to [0,1]$ computable by size $s$ circuits, and parameters $\varepsilon$ and $\gamma = \varepsilon^2$. Given the multicalibrated partition, construct the random variables $\tilde{X}_b$ as follows: to sample according to $\tilde{X}_b$, first choose a piece of the partition $P_i \in \mathcal{P}$, where $P_i$ is chosen with probability $\Pr[X_b \in P_i]$. Then sample $x \sim \mathcal{D}|_{P_i}$.
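To make the construction concrete, here is a minimal Python sketch (ours, not from the paper) that computes the probability table of $\tilde{X}_b$ from probability tables for $X_0, X_1$ and a partition function `p` into `m` parts:

```python
def tilde(X0, X1, p, m, b):
    """Probability table of X~b: part P_i gets mass Pr[X_b in P_i], and the
    conditional distribution within each part is D|P_i, for D = (X0 + X1)/2."""
    Xb = X1 if b == 1 else X0
    D = {x: 0.5 * X0.get(x, 0.0) + 0.5 * X1.get(x, 0.0)
         for x in set(X0) | set(X1)}
    d_mass = [0.0] * m  # Pr[D in P_i]
    b_mass = [0.0] * m  # Pr[X_b in P_i]
    for x, prob in D.items():
        d_mass[p(x)] += prob
    for x, prob in Xb.items():
        b_mass[p(x)] += prob
    # reweight D within each part so that part i carries mass Pr[X_b in P_i]
    return {x: prob * b_mass[p(x)] / d_mass[p(x)]
            for x, prob in D.items() if d_mass[p(x)] > 0}
```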

With this definition, we can provide an informal overview of the proof of Theorem 1.12. We start with item (b), as its proof is essentially definitional. Recall that this item states that $p(X_0)$ and $p(\tilde{X}_0)$ are identically distributed (and analogously for $X_1$). This follows because in the construction of the random variable $\tilde{X}_0$, we sample an element from part $P_i$ with probability $\Pr[X_0 \in P_i]$, and thus, necessarily, the probability mass placed on each part $P_i$ is identical between the two distributions. Next, we can also see that part (c) is definitional. In the construction of $\tilde{X}_0$ and $\tilde{X}_1$, conditioned on being in the piece $P_i$, each of the marginal distributions is exactly $\mathcal{D}|_{P_i}$, and thus the marginal distributions match. Next, to conclude item (d) of Theorem 1.12, recall that the partition function $p$ is exactly the one returned by the Multicalibration Theorem. Here, we can take advantage of the fact that multicalibrated partitions are efficiently computable given oracle access to the underlying set of test functions (in this case, size $s$ circuits). By replacing these oracles with the size $s$ circuits, this yields an overall complexity of $O(s/\varepsilon^6)$ for the circuit size required to compute $p$. Finally, it remains to prove item (a) of Theorem 1.12. Because the partition $\mathcal{P}$ that is returned is multicalibrated with respect to our function $g$, by applying Lemma 2.15 of [2] and following the approach of the DMT++ proof of [2], we can show that for every part $P_i \in \mathcal{P}$ and every circuit of size $s$, the distributions $X_0|_{P_i}$ and $\tilde{X}_0|_{P_i}$ are indistinguishable up to small error (and likewise for $X_1$). By combining this “local” indistinguishability across all parts of the partition, this then yields the overall indistinguishability between $X_0, \tilde{X}_0$ (and similarly between $X_1, \tilde{X}_1$).

It is worth comparing our use of the Multicalibration Theorem to the use of Impagliazzo's Hardcore Theorem [12, 11] in some past works on computational indistinguishability and pseudorandomness [9, 13]. Applied to the same function $g$ above, the Hardcore Theorem says that if $X_0, X_1$ are $(s, 1 - \delta)$-indistinguishable, for $s = O(s' \cdot \mathrm{poly}(1/\varepsilon, 1/\delta))$, then there is a subset $H \subseteq \mathcal{X}$ with $\Pr_{x \sim X_0}[x \in H] = \Pr_{x \sim X_1}[x \in H] = \delta$ such that $X_0|_H$ and $X_1|_H$ are $(s', \varepsilon)$-indistinguishable. Replacing $X_0|_H$ and $X_1|_H$ with their average as above, we obtain $\tilde{X}_0, \tilde{X}_1$ that are $(s', \varepsilon)$-indistinguishable from $X_0, X_1$ respectively, and such that $d_{TV}(\tilde{X}_0, \tilde{X}_1) \le 1 - \delta$. Thus, the Hardcore Lemma tightly captures the single-sample computational indistinguishability of $X_0$ and $X_1$ by the single-sample information-theoretic indistinguishability of $\tilde{X}_0$ and $\tilde{X}_1$. But it does not seem to capture enough information to obtain an instance-optimal characterization of the multiple-sample indistinguishability, as we do. Concretely, the Hardcore Lemma requires assuming some initial average-case hardness of $g$ (which comes from the weak indistinguishability of $X_0$ and $X_1$) and only gives us indistinguishability from an information-theoretic counterpart on a small subset $H \subseteq \mathcal{X}$, which is not guaranteed to be efficiently recognizable. In contrast, the partition given by the Multicalibration Theorem covers the entire domain $\mathcal{X}$ with efficiently recognizable sets, and this is crucial for our instance-optimal characterization.

1.4 Organization of the paper

Section 3, together with additional sections in the full version, proves the different components of Theorem 1.1. In Section 3, we apply the Multicalibration Theorem to prove Theorem 1.12. Theorem 1.12 Part (a) is equivalent to Theorem 1.1 Part (1). The full version presents a proof of Theorem 1.1 Parts (2) and (3), as well as a more general version of the statement we prove, which encompasses indistinguishability against families of functions beyond size $s$ circuits. Also in the full version are proofs of Theorem 1.1 Part (4), and a general version of the statement, again for broader families of functions, all of which rely on Theorem 1.12.

The full version of the paper also shows how our results and analysis imply a result comparable to the main theorem proven in [9, 5], and how they can be used to prove Theorem 1.5 and Theorem 1.7, which characterize distinguishability in terms of pseudo-Hellinger distance and pseudo-Rényi $\frac{1}{2}$-entropy (as defined in Definition 1.4 and Definition 1.6). Lastly, the full version also presents a generalization of our results to the distinguishability of general product distributions (instead of $X_0^{\otimes k}$ versus $X_1^{\otimes k}$).

2 Preliminaries

2.1 Distinguishability

Hellinger distance can be used to characterize the ability to distinguish two distributions when we receive multiple samples:

Claim 2.1.

[See [15, 17]]

1. $d_H^2(X, Y) \le d_{TV}(X, Y) \le \sqrt{2}\, d_H(X, Y)$.

2. $d_H^2(X^{\otimes m}, Y^{\otimes m}) = 1 - (1 - d_H^2(X, Y))^m \in \left[1 - e^{-m \cdot d_H^2(X, Y)},\; m \cdot d_H^2(X, Y)\right]$.

Combining the above gives us that

$$1 - e^{-m \cdot d_H^2(X, Y)} \le d_{TV}(X^{\otimes m}, Y^{\otimes m}) \le \sqrt{2m} \cdot d_H(X, Y). \tag{3}$$

As a corollary, this implies that $m = \Theta(1/d_H^2(X, Y))$ samples are necessary and sufficient to increase the total variation distance $d_{TV}(X^{\otimes m}, Y^{\otimes m})$ up to a constant.
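The tensorization identity in Claim 2.1(2) is easy to check numerically; a small sanity check of ours (the two distributions are arbitrary illustrative choices):

```python
import itertools
import math

X = {0: 0.7, 1: 0.3}
Y = {0: 0.4, 1: 0.6}

def hellinger_sq(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum((math.sqrt(p.get(x, 0.0)) - math.sqrt(q.get(x, 0.0))) ** 2
                     for x in keys)

def product_dist(p, m):
    # the m-fold product distribution over tuples of outcomes
    return {xs: math.prod(p[x] for x in xs)
            for xs in itertools.product(p, repeat=m)}

m = 5
h2 = hellinger_sq(X, Y)
lhs = hellinger_sq(product_dist(X, m), product_dist(Y, m))
rhs = 1 - (1 - h2) ** m
assert abs(lhs - rhs) < 1e-12  # d_H^2(X^m, Y^m) = 1 - (1 - d_H^2(X, Y))^m
```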

Definition 2.2.

Let $X$ be a random variable over $\mathcal{X}$. The Rényi entropy of order $1/2$ of $X$ is defined as:

$$H_{1/2}(X) = 2\log_2\left(\sum_{i \in \mathcal{X}} \sqrt{\Pr[X = i]}\right).$$

It can be shown that $H_{1/2}(X) \le \log_2|\mathcal{X}|$, with equality if and only if $X$ is uniform on $\mathcal{X}$. More generally, the gap in this inequality exactly measures the Hellinger distance to the uniform distribution on $\mathcal{X}$.

Claim 2.3.

Let $X$ be a random variable over $\mathcal{X}$ and let $\mathcal{U}$ be the uniform distribution over $\mathcal{X}$. Then

$$d_H^2(X, \mathcal{U}) = 1 - \sqrt{\frac{2^{H_{1/2}(X)}}{|\mathcal{X}|}}.$$

Proof.

We can rewrite $d_H^2(X, \mathcal{U})$ as follows:

$$d_H^2(X, \mathcal{U}) = \frac{1}{2}\sum_{i \in \mathcal{X}}\left(\sqrt{\Pr[X = i]} - \sqrt{\frac{1}{|\mathcal{X}|}}\right)^2 = \frac{1}{2}\left(1 + 1 - 2\sum_{i \in \mathcal{X}}\sqrt{\frac{\Pr[X = i]}{|\mathcal{X}|}}\right)$$
$$= 1 - \frac{1}{\sqrt{|\mathcal{X}|}}\sum_{i \in \mathcal{X}}\sqrt{\Pr[X = i]} = 1 - \sqrt{\frac{2^{H_{1/2}(X)}}{|\mathcal{X}|}}.$$
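A quick numeric check of Claim 2.3 (ours; the distribution is an arbitrary illustrative choice):

```python
import math

X = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
n = len(X)
U = {i: 1.0 / n for i in X}  # uniform distribution on the same domain

# Renyi 1/2-entropy: H_{1/2}(X) = 2 * log2(sum_i sqrt(Pr[X=i]))
h_half = 2 * math.log2(sum(math.sqrt(p) for p in X.values()))
dh2 = 0.5 * sum((math.sqrt(X[i]) - math.sqrt(U[i])) ** 2 for i in X)
assert abs(dh2 - (1 - math.sqrt(2 ** h_half / n))) < 1e-12
```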

Definition 2.4 (Statistical indistinguishability).

Random variables $X$ and $Y$ are called $\varepsilon$-statistically indistinguishable if $d_{TV}(X, Y) \le \varepsilon$.

We define computational indistinguishability for a class of functions $\mathcal{F}$:

Definition 2.5 (Computational indistinguishability).

Random variables $X$ and $Y$ over domain $\mathcal{X}$ are $\varepsilon$-indistinguishable with respect to $\mathcal{F}$ if for every $f \in \mathcal{F}$:

$$\left|\sum_{x \in \mathcal{X}} f(x)\left(\Pr[X = x] - \Pr[Y = x]\right)\right| \le \varepsilon.$$

Random variables $X$ and $Y$ are $\varepsilon$-distinguishable with respect to $\mathcal{F}$ if there exists $f \in \mathcal{F}$ such that:

$$\left|\sum_{x \in \mathcal{X}} f(x)\left(\Pr[X = x] - \Pr[Y = x]\right)\right| \ge \varepsilon.$$
Definition 2.6 (Oracle-aided circuits and partitions).

For $\mathcal{F}$ a class of functions, let $\mathcal{F}_{q,t}$ be the class of functions that can be computed by an oracle-aided circuit of size $t$ with $q$ oracle gates, where each oracle gate is instantiated with a function from $\mathcal{F}$. Let $\mathcal{P}_{q,t,k}$ denote the set of partitions $\mathcal{P}$ such that there exists $\hat{f} \in \mathcal{F}_{q,t}$, $\hat{f}: \mathcal{X} \to [k]$, with $\mathcal{P} = \{\hat{f}^{-1}(1), \ldots, \hat{f}^{-1}(k)\}$.

Note that when we restrict our attention to $\mathcal{F}$ being the set of functions computable by size $s$ circuits, $\mathcal{F}_{q,t}$ is simply the set of functions computable by size $t + q \cdot s$ circuits. The discussion in the introduction uses exactly this simplification.

2.2 Multicalibration

We now recall the definition of multicalibration [10, 7, 8, 3], but state it in its full generality (with respect to arbitrary classes of functions $\mathcal{F}$):

Definition 2.7 (Multicalibration [10] as formulated in [2]).

Let $g: \mathcal{X} \to [0,1]$ be an arbitrary function, $\mathcal{D}$ be a probability distribution over $\mathcal{X}$, and $\mathcal{F}$ be a class of functions $f: \mathcal{X} \to [0,1]$. Let $\varepsilon, \gamma > 0$ be constants. A partition $\mathcal{P}$ of $\mathcal{X}$ is $(\mathcal{F}, \varepsilon, \gamma)$-approximately multicalibrated for $g$ on $\mathcal{D}$ if for every $f \in \mathcal{F}$ and every $P \in \mathcal{P}$ satisfying $\Pr_{x \sim \mathcal{D}}[x \in P] \ge \gamma$, it holds that

$$\left|\mathbb{E}_{x \sim \mathcal{D}|_P}\left[f(x)(g(x) - v_P)\right]\right| \le \varepsilon,$$

where we define $v_P := \mathbb{E}_{x \sim \mathcal{D}|_P}[g(x)]$.

As mentioned above, [10] shows how to construct multicalibrated partitions with low complexity relative to the class of functions $\mathcal{F}$. Their main result can be stated as:

Theorem 2.8 (Multicalibration Theorem [10]).

Let $\mathcal{X}$ be a finite domain, $\mathcal{F}$ a class of functions $f: \mathcal{X} \to [0,1]$, $g: \mathcal{X} \to [0,1]$ an arbitrary function, $\mathcal{D}$ a probability distribution over $\mathcal{X}$, and $\varepsilon, \gamma > 0$. Then, there exists an $(\mathcal{F}, \varepsilon, \gamma)$-approximately multicalibrated partition $\mathcal{P}$ of $\mathcal{X}$ for $g$ on $\mathcal{D}$ such that $\mathcal{P} \in \mathcal{P}_{q,t,k}$ for $t = O\left((1/(\varepsilon^4\gamma)) \cdot \log(|\mathcal{X}|/\varepsilon)\right)$, $q = O(1/(\varepsilon^2\gamma^2))$, and $k = O(1/\varepsilon)$.

3 Applying the Multicalibration Theorem to Prove Theorem 1.12

In this section, we apply the Multicalibration Theorem to prove the following theorem, which is a generalization of Theorem 1.12. In later sections, we will use this theorem to prove the different components of Theorem 1.1.

Theorem 3.1 (General version of Theorem 1.12).

For every pair of random variables $X_0, X_1$, every family $\mathcal{F}$ of functions $f: \mathcal{X} \to [0,1]$, and every $\varepsilon > 0$, there exist random variables $\tilde{X}_0, \tilde{X}_1$ and a function $p: \mathcal{X} \to [m]$ for $m = O(1/\varepsilon)$ such that:

(a) $\tilde{X}_0$ is $\varepsilon$-indistinguishable from $X_0$ and $\tilde{X}_1$ is $\varepsilon$-indistinguishable from $X_1$ for functions $f \in \mathcal{F}$.

(b) $p(X_0)$ is identically distributed to $p(\tilde{X}_0)$ and $p(X_1)$ is identically distributed to $p(\tilde{X}_1)$.

(c) For every $y \in [m]$, $\tilde{X}_0|_{p(\tilde{X}_0) = y}$ is identically distributed to $\tilde{X}_1|_{p(\tilde{X}_1) = y}$.

(d) $p$ is computable by functions in $\mathcal{F}_{O(1/\varepsilon^6),\; O\left((1/\varepsilon^6)\log(|\mathcal{X}|)\log(|\mathcal{X}|/\varepsilon)\right)}$.

Definitions and notation used throughout the proof

The random variables $\tilde{X}_0, \tilde{X}_1$ and function $p: \mathcal{X} \to [m]$ in Theorem 3.1 are constructed as follows, by drawing a connection to the multicalibrated partition guaranteed by the Multicalibration Theorem.

Definition 3.2.

(General version of Definition 1.15) For a pair of random variables $X_0, X_1$, class of functions $\mathcal{F}$, and parameter $\varepsilon$, we define the family of functions $\mathcal{F}'$, distribution $\mathcal{D}$, function $g$, random variables $\tilde{X}_0, \tilde{X}_1$, and function $p: \mathcal{X} \to [m]$ as follows.

(a) Let $\mathcal{F}'$ be a family of functions such that $\mathcal{F}' \supseteq \mathcal{F}_{c,\, c\log|\mathcal{X}|}$. For example, if $\mathcal{F}$ corresponds to size $s$ circuits, then $\mathcal{F}'$ is the family of functions given by circuits of size at most $s \cdot c + c\log|\mathcal{X}|$, for some universal constant $c$.

(b) Let the distribution $\mathcal{D}$ be as follows: To sample from $\mathcal{D}$, first pick $B \in \{0,1\}$ uniformly at random, then output a sample $x \sim X_B$. Note that

$$\Pr_{x' \sim \mathcal{D}}[x' = x] = \frac{1}{2}\Pr[X_0 = x] + \frac{1}{2}\Pr[X_1 = x].$$

For a subset of the domain $\mathcal{S} \subseteq \mathcal{X}$, let $\mathcal{D}|_{\mathcal{S}}$ be the conditional distribution of $\mathcal{D}$ over the set $\mathcal{S}$.

(c) We then define the randomized function $g$ as:

$$g(x) = \begin{cases} 0 & \text{with probability } \frac{\Pr[X_0 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}, \\[4pt] 1 & \text{with probability } \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}. \end{cases}$$

(d) Random variables $\tilde{X}_0, \tilde{X}_1$: Consider the multicalibrated partition $\mathcal{P} = \{P_i\}$ guaranteed by the Multicalibration Theorem when applied to the function $g$, distribution $\mathcal{D}$ over domain $\mathcal{X}$, class of functions $\mathcal{F}'$, and parameters $\varepsilon$ and $\gamma = \varepsilon^2$. Given the multicalibrated partition, construct the random variables $\tilde{X}_b$ as follows: to sample according to $\tilde{X}_b$, first choose a piece of the partition $P_i \in \mathcal{P}$, where $P_i$ is chosen with probability $\Pr[X_b \in P_i]$. Then sample $x \sim \mathcal{D}|_{P_i}$.

Let $m = |\mathcal{P}|$ be the number of parts in this partition $\mathcal{P}$.

(e) We let $p: \mathcal{X} \to [m]$ be the function that returns which part of the multicalibrated partition $\mathcal{P}$ an element $x \in \mathcal{X}$ is in. That is, if $x \in P_i$ for $P_i \in \mathcal{P}$, then $p(x) = i$.

Given this setup of $\tilde{X}_0$, $\tilde{X}_1$, and $p$, we are now ready to prove the different parts of Theorem 3.1.

3.1 Proof of Part (a)

We first analyze the behavior of the random variables over parts of the partition given by the Multicalibration Theorem. We begin by studying the indistinguishability of $X_0|_{P_i}$ versus $X_1|_{P_i}$, relying on the following lemma from [2].

Lemma 3.3 (Lemma 2.15 [2]).

Let $\mathcal{F} = \{h : \mathcal{X} \to [0,1]\}$ be a class of functions that is closed under negation and contains the all-zero ($h(x) = 0$ for all $x$) and all-one ($h(x) = 1$ for all $x$) functions. Let $\mathcal{D}$ be a distribution over $\mathcal{X}$ and consider $\varepsilon > 0$. Let $\mathcal{F}'$ be any family of functions such that $\mathcal{F}' \supseteq \mathcal{F}_{c,\, c\log|\mathcal{X}|}$ for a universal constant $c$.

Suppose that $g: \mathcal{X} \to [0,1]$ is identified with a randomized Boolean-valued function and is $(\mathcal{F}', \varepsilon)$-indistinguishable from the constant function $v := \mathbb{E}_{x \sim \mathcal{D}}[g(x)]$. Then the distribution $\mathcal{D}|_{g(x) = 1}$ is $\left(\mathcal{F}, \frac{\varepsilon}{v(1 - v)}\right)$-indistinguishable from $\mathcal{D}|_{g(x) = 0}$.

Lemma 3.4.

For random variables $X_0, X_1$ and family of functions $\mathcal{F}$, consider the construction of the family of functions $\mathcal{F}'$, the distribution $\mathcal{D}$, function $g$, and partition $\mathcal{P}$ as in Definition 3.2. For $P_i \in \mathcal{P}$, let $v_{P_i} = \mathbb{E}_{x \sim \mathcal{D}|_{P_i}}[g(x)]$.

For all $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$, $X_0|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{v_{P_i}(1 - v_{P_i})}\right)$-indistinguishable from $X_1|_{P_i}$.

Proof.

Similarly to the approach in the proof of Theorem 5.3 (DMT++) in [2], we consider the distribution $\mathcal{D}$ that picks $B \in \{0,1\}$ at random and outputs a sample $x \sim X_B$, and the randomized function $g$ that outputs $B$. This precisely corresponds to the choice of $\mathcal{D}, g$ corresponding to $X_0, X_1$, and $\mathcal{F}$ from Definition 3.2. As in the proof of Theorem 5.3 in [2], we will show that for all $P_i \in \mathcal{P}$, $(\mathcal{D}|_{P_i})|_{g(x) = 0} = X_0|_{P_i}$ and $(\mathcal{D}|_{P_i})|_{g(x) = 1} = X_1|_{P_i}$; we can therefore use Lemma 3.3 (applied with $\mathcal{D}|_{P_i}$ in place of $\mathcal{D}$) to transfer the indistinguishability of $g$ from $v_{P_i}$ over each part of the partition $P_i \in \mathcal{P}$ to the indistinguishability of $X_0|_{P_i}$ and $X_1|_{P_i}$. More specifically, the Multicalibration Theorem guarantees that $\mathcal{P}$ is a low-complexity partition with $O(1/\varepsilon)$ parts such that, on each $P_i \in \mathcal{P}$ with $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$, $g$ is $\varepsilon$-indistinguishable from the corresponding constant function $v_{P_i} := \mathbb{E}_{x \sim \mathcal{D}|_{P_i}}[g(x)]$. Applying Lemma 3.3 therefore implies that for each part $P_i$ of the partition $\mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$, $X_1|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{v_{P_i}(1 - v_{P_i})}\right)$-indistinguishable from $X_0|_{P_i}$, for any class of functions $\mathcal{F}$ with $\mathcal{F}' \supseteq \mathcal{F}_{c,\, c\log|\mathcal{X}|}$ for a universal constant $c$.

We work out the details for showing $(\mathcal{D}|_{P_i})|_{g(x) = 1} = X_1|_{P_i}$ below, and the proof of $(\mathcal{D}|_{P_i})|_{g(x) = 0} = X_0|_{P_i}$ follows similarly. In what follows, let $\mathrm{rand}(g)$ denote that the probability of an event is taken over the randomness of the randomized Boolean-valued function $g$.

We see that

$$\Pr_{x' \sim (\mathcal{D}|_{P_i})|_{g(x) = 1}}[x' = x] = \frac{\Pr_{\mathrm{rand}(g)}[g(x) = 1 \mid x] \cdot \Pr_{x' \sim \mathcal{D}|_{P_i}}[x' = x]}{\Pr_{\mathrm{rand}(g),\, x \sim \mathcal{D}|_{P_i}}[g(x) = 1]}. \tag{4}$$

Now, the denominator equals:

$$\Pr_{\mathrm{rand}(g),\, x \sim \mathcal{D}|_{P_i}}[g(x) = 1] = \frac{1}{\Pr[\mathcal{D} \in P_i]}\sum_{x \in P_i} \Pr_{x' \sim \mathcal{D}}[x' = x] \cdot \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]} = \frac{1}{2} \cdot \frac{\Pr[X_1 \in P_i]}{\Pr[\mathcal{D} \in P_i]} = \frac{\Pr[X_1 \in P_i]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]}.$$

Similarly, the numerator equals:

$$\Pr_{\mathrm{rand}(g)}[g(x) = 1 \mid x] \cdot \Pr_{x' \sim \mathcal{D}|_{P_i}}[x' = x] = \frac{\Pr[X_1 = x]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]}.$$

Therefore, Equation (4) gives:

$$\Pr_{x' \sim (\mathcal{D}|_{P_i})|_{g(x) = 1}}[x' = x] = \frac{\Pr[X_1 = x]}{\Pr[X_1 \in P_i]} = \Pr[X_1|_{P_i} = x],$$

which means that $(\mathcal{D}|_{P_i})|_{g(x) = 1}$ is equivalent to $X_1|_{P_i}$.
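This Bayes-rule computation can also be checked numerically; a small sketch of ours, with arbitrary illustrative probability tables and a two-element part `Pi`:

```python
X0 = {'a': 0.3, 'b': 0.2, 'c': 0.5}
X1 = {'a': 0.1, 'b': 0.4, 'c': 0.5}
Pi = {'a', 'b'}  # one part of the partition

D = {x: 0.5 * X0[x] + 0.5 * X1[x] for x in X0}
D_Pi = {x: D[x] / sum(D[y] for y in Pi) for x in Pi}

# condition D|Pi on g(x) = 1, where Pr[g(x)=1 | x] = X1[x] / (X0[x] + X1[x])
num = {x: (X1[x] / (X0[x] + X1[x])) * D_Pi[x] for x in Pi}
Z = sum(num.values())
cond = {x: num[x] / Z for x in Pi}

# this recovers X1 conditioned on Pi, as the proof of Lemma 3.4 shows
X1_Pi = {x: X1[x] / sum(X1[y] for y in Pi) for x in Pi}
assert all(abs(cond[x] - X1_Pi[x]) < 1e-12 for x in Pi)
```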

We next prove that we can represent $\mathcal{D}|_{P_i}$ as a convex combination of $X_0|_{P_i}$ and $X_1|_{P_i}$.

Lemma 3.5.

For random variables $X_0, X_1$ and family of functions $\mathcal{F}$, consider the construction of the distribution $\mathcal{D}$ and partition $\mathcal{P}$ as in Definition 3.2. For every $P_i \in \mathcal{P}$, define

$$\alpha_0(P_i) = \frac{\Pr[X_0 \in P_i]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]} \quad \text{and} \quad \alpha_1(P_i) = \frac{\Pr[X_1 \in P_i]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]}.$$

Then $\mathcal{D}|_{P_i} = \alpha_0(P_i) \cdot X_0|_{P_i} + \alpha_1(P_i) \cdot X_1|_{P_i}$.

Proof.

For $x \in P_i$,

$$\mathcal{D}|_{P_i}[x] = \frac{\mathcal{D}[x]}{\mathcal{D}[P_i]} = \frac{\frac{1}{2}\Pr[X_0 = x] + \frac{1}{2}\Pr[X_1 = x]}{\frac{1}{2}\Pr[X_0 \in P_i] + \frac{1}{2}\Pr[X_1 \in P_i]} = \frac{\Pr[X_0 = x] + \Pr[X_1 = x]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]}.$$

Relating this to the conditional distributions $X_0|_{P_i}$ and $X_1|_{P_i}$ of $X_0$ and $X_1$, this expression equals:

$$\frac{1}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]}\left(\Pr[X_0|_{P_i} = x] \cdot \Pr[X_0 \in P_i] + \Pr[X_1|_{P_i} = x] \cdot \Pr[X_1 \in P_i]\right).$$

Given the definition of $\alpha_0(P_i)$ and $\alpha_1(P_i)$, we see that

$$\mathcal{D}|_{P_i}[x] = \alpha_0(P_i) \cdot \Pr[X_0|_{P_i} = x] + \alpha_1(P_i) \cdot \Pr[X_1|_{P_i} = x]. \tag{5}$$

Stated more concisely, $\mathcal{D}|_{P_i} = \alpha_0(P_i) \cdot X_0|_{P_i} + \alpha_1(P_i) \cdot X_1|_{P_i}$. Note that $\alpha_0(P_i) \ge 0$, $\alpha_1(P_i) \ge 0$, and $\alpha_0(P_i) + \alpha_1(P_i) = 1$, so this is indeed a convex combination.

Next we prove that $X_0$ is computationally indistinguishable from $\tilde{X}_0$ and $X_1$ is computationally indistinguishable from $\tilde{X}_1$ on every part of the partition $\mathcal{P}$ guaranteed by applying the Multicalibration Theorem to $g$. We begin by noting that $X_1|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{\alpha_1(P_i)(1 - \alpha_1(P_i))}\right)$-indistinguishable from $X_0|_{P_i}$, which is a simple consequence of the following observation.

Observation.

Note that, for all $P_i \in \mathcal{P}$, $v_{P_i} = \alpha_1(P_i)$, which can be seen as follows:

$$v_{P_i} = \mathbb{E}_{x \sim \mathcal{D}|_{P_i}}[g(x)] = \sum_{x \in P_i} \Pr_{x' \sim \mathcal{D}|_{P_i}}[x' = x] \cdot \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]} = \sum_{x \in P_i} \frac{\Pr_{x' \sim \mathcal{D}}[x' = x]}{\Pr[\mathcal{D} \in P_i]} \cdot \frac{\Pr[X_1 = x]}{\Pr[X_0 = x] + \Pr[X_1 = x]}$$
$$= \frac{1}{2} \cdot \frac{1}{\Pr[\mathcal{D} \in P_i]}\sum_{x \in P_i} \Pr[X_1 = x] = \frac{\frac{1}{2}\Pr[X_1 \in P_i]}{\frac{1}{2}\Pr[X_0 \in P_i] + \frac{1}{2}\Pr[X_1 \in P_i]} = \frac{\Pr[X_1 \in P_i]}{\Pr[X_0 \in P_i] + \Pr[X_1 \in P_i]} = \alpha_1(P_i).$$
Lemma 3.6.

For random variables $X_0, X_1$ and family of functions $\mathcal{F}$, consider the construction of the family of functions $\mathcal{F}'$, the distribution $\mathcal{D}$, function $g$, and partition $\mathcal{P}$ as in Definition 3.2. For $P_i \in \mathcal{P}$, consider the definitions of $\alpha_0(P_i)$ and $\alpha_1(P_i)$ as in Lemma 3.5.

For all $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$, $X_0|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{1 - \alpha_1(P_i)}\right)$-indistinguishable from $\tilde{X}_0|_{P_i}$ and $X_1|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{\alpha_1(P_i)}\right)$-indistinguishable from $\tilde{X}_1|_{P_i}$.

The idea behind the proof of this lemma is as follows. First, since for all $P_i \in \mathcal{P}$, $\tilde{X}_0|_{P_i}$ is equivalent to $\mathcal{D}|_{P_i}$, we want to argue that $\mathcal{D}|_{P_i}$ must also be indistinguishable from $X_0|_{P_i}$ on all $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$. This is implied by the facts that $\mathcal{D}|_{P_i}$ is a convex combination of $X_0|_{P_i}$ and $X_1|_{P_i}$ and that these two random variables are indistinguishable for these $P_i \in \mathcal{P}$.

Proof.

By definition of $\tilde{X}_0$, for $x \in P_i$:

$$\Pr[\tilde{X}_0 = x] = \Pr[X_0 \in P_i] \cdot \mathcal{D}|_{P_i}[x].$$

From Lemma 3.5, this equals:

$$\Pr[X_0 \in P_i]\left(\alpha_0(P_i) \cdot \Pr[X_0|_{P_i} = x] + \alpha_1(P_i) \cdot \Pr[X_1|_{P_i} = x]\right).$$

Define $\tilde{X}_0|_{P_i}$ to be the random variable such that for $x \in P_i$, $\Pr[\tilde{X}_0|_{P_i} = x] = \Pr[\tilde{X}_0 = x \mid x \in P_i]$. Note that this equals $\mathcal{D}|_{P_i}[x]$. Recall that, from Lemma 3.4, for all $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$, $X_1|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{\alpha_1(P_i)(1 - \alpha_1(P_i))}\right)$-indistinguishable from $X_0|_{P_i}$.

To show that $X_0|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{1 - \alpha_1(P_i)}\right)$-indistinguishable from $\tilde{X}_0|_{P_i}$, we need to bound, for every $f \in \mathcal{F}$: $\left|\sum_{x \in P_i} f(x)\left(\mathcal{D}|_{P_i}[x] - X_0|_{P_i}[x]\right)\right|$.

We see that

$$\left|\sum_{x \in P_i} f(x)\left(\mathcal{D}|_{P_i}[x] - X_0|_{P_i}[x]\right)\right| = \left|\sum_{x \in P_i} f(x)\left((\alpha_0(P_i) - 1)\, X_0|_{P_i}[x] + \alpha_1(P_i)\, X_1|_{P_i}[x]\right)\right|$$
$$= \alpha_1(P_i)\left|\sum_{x \in P_i} f(x)\left(X_0|_{P_i}[x] - X_1|_{P_i}[x]\right)\right| \le \frac{\varepsilon}{1 - \alpha_1(P_i)}.$$

Similarly, we can also show that $X_1|_{P_i}$ is $\left(\mathcal{F}, \frac{\varepsilon}{\alpha_1(P_i)}\right)$-indistinguishable from $\tilde{X}_1|_{P_i}$.

We are finally ready to prove Theorem 3.1 part (a).

Proof.

For random variables $X_0, X_1$ and family of functions $\mathcal{F}$, consider the construction of the family of functions $\mathcal{F}'$, the distribution $\mathcal{D}$, function $g$, and partition $\mathcal{P}$ as in Definition 3.2.

We use the analysis of indistinguishability of random variables over all $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$ to prove the indistinguishability of $X_0$ and $\tilde{X}_0$ globally over the domain. Because we have results about indistinguishability over parts of the partition that have enough weight with respect to $\mathcal{D}$, but not those whose weight is too small, we need to break up the analysis to handle both types of $P_i \in \mathcal{P}$. Let $\mathcal{P}(\gamma)$ be the set of $P_i \in \mathcal{P}$ such that $\Pr_{x \sim \mathcal{D}}[x \in P_i] \ge \gamma$.

We focus on the indistinguishability of $X_0$ from $\tilde{X}_0$ in the proof. The proof of indistinguishability of $X_1$ from $\tilde{X}_1$ follows by similar arguments. For every $f \in \mathcal{F}$, we have:

$$\left|\sum_{P_i \in \mathcal{P}}\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right| \le \sum_{P_i \in \mathcal{P}(\gamma)}\left|\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right|$$
$$+ \sum_{P_i \in \mathcal{P} \setminus \mathcal{P}(\gamma)}\left|\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right|. \tag{6}$$

Let us focus on the first of the two summations in Equation (6). We see

$$\sum_{P_i \in \mathcal{P}(\gamma)}\left|\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right| = \sum_{P_i \in \mathcal{P}(\gamma)} \Pr[X_0 \in P_i]\left|\sum_{x \in P_i} f(x)\left(\Pr[X_0|_{P_i} = x] - \Pr[\tilde{X}_0|_{P_i} = x]\right)\right|.$$

Applying Lemma 3.6, this is at most:

$$\sum_{P_i \in \mathcal{P}(\gamma)} \Pr[X_0 \in P_i] \cdot \frac{\varepsilon}{\alpha_0(P_i)} = \sum_{P_i \in \mathcal{P}(\gamma)} \alpha_0(P_i) \cdot 2\Pr[\mathcal{D} \in P_i] \cdot \frac{\varepsilon}{\alpha_0(P_i)} \le 2\varepsilon.$$

Let us now focus on the second of the two summations in Equation (6). We see

$$\sum_{P_i \in \mathcal{P} \setminus \mathcal{P}(\gamma)}\left|\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right| \le \sum_{P_i \in \mathcal{P} \setminus \mathcal{P}(\gamma)}\left(\Pr[X_0 \in P_i] + \Pr[\tilde{X}_0 \in P_i]\right) = \sum_{P_i \in \mathcal{P} \setminus \mathcal{P}(\gamma)} 2\Pr[X_0 \in P_i] < 4\gamma|\mathcal{P}|,$$

where the last inequality follows from observing that $\frac{1}{2}\Pr[X_0 \in P_i] \le \Pr[\mathcal{D} \in P_i] < \gamma$. Note that, by the definition of $\gamma$ and by the bounds on $|\mathcal{P}|$ from the Multicalibration Theorem, $4\gamma|\mathcal{P}|$ is $O(\varepsilon)$.

Combining the different components of the analysis above, we find that:

$$\left|\sum_{P_i \in \mathcal{P}}\sum_{x \in P_i} f(x)\left(\Pr[X_0 = x] - \Pr[\tilde{X}_0 = x]\right)\right| \le O(\varepsilon).$$

Therefore, $X_0$ is $(\mathcal{F}, O(\varepsilon))$-indistinguishable from $\tilde{X}_0$. Similarly, $X_1$ is $(\mathcal{F}, O(\varepsilon))$-indistinguishable from $\tilde{X}_1$.

Additionally, to conclude the theorem, note that by the definition of $\tilde{X}_0, \tilde{X}_1$, for the partition $\mathcal{P}$ given by the Multicalibration Theorem according to which $\tilde{X}_0, \tilde{X}_1$ are defined, we have $\tilde{X}_0|_{P_i} = \tilde{X}_1|_{P_i}$ for all $P_i \in \mathcal{P}$.

3.2 Proof of Parts (b, c, d)

Here, we prove the remaining properties of Theorem 3.1:

Proof.

For random variables $X_0, X_1$ and family of functions $\mathcal{F}$, consider the construction of the family of functions $\mathcal{F}'$, the distribution $\mathcal{D}$, function $g$, partition $\mathcal{P}$, and function $p: \mathcal{X} \to [m]$ as in Definition 3.2. We show the following:

Claim 3.7 (Part (b) of Theorem 3.1).

$p(X_0)$ is identically distributed to $p(\tilde{X}_0)$ and $p(X_1)$ is identically distributed to $p(\tilde{X}_1)$.

Proof.

We show this WLOG for $X_0, \tilde{X}_0$; it follows by definition. Recall that to construct $\tilde{X}_0$, we sample part $P_i$ with probability $\Pr[X_0 \in P_i]$, and then replace the conditional distribution over $P_i$ with $\mathcal{D}|_{P_i}$ (as defined in Definition 3.2). Thus, for any $P_i \in \mathcal{P}$, we have $\Pr[\tilde{X}_0 \in P_i] = \Pr[X_0 \in P_i]$.

Claim 3.8 (Part (c) of Theorem 3.1).

For every $y \in [m]$, $\tilde{X}_0|_{p(\tilde{X}_0) = y}$ is identically distributed to $\tilde{X}_1|_{p(\tilde{X}_1) = y}$.

Proof.

Again, this is merely definitional. As in Definition 3.2, for any part $P_i \in \mathcal{P}$, we define

$$\tilde{X}_1|_{P_i} = \mathcal{D}|_{P_i},$$

and likewise

$$\tilde{X}_0|_{P_i} = \mathcal{D}|_{P_i}.$$

Thus, the two distributions have the same marginal when conditioned on being in any part $P_i$. We conclude by recalling that the parts $P_i$ are exactly the preimages $p^{-1}(y)$ for $y \in [m]$, hence yielding the statement.

Claim 3.9 (Part (d) of Theorem 3.1).

$p$ is computable by functions in

$$\mathcal{F}_{O(1/\varepsilon^6),\; O\left((1/\varepsilon^6)\log(|\mathcal{X}|)\log(|\mathcal{X}|/\varepsilon)\right)}.$$
Proof.

Recall that in Definition 3.2, $p$ is exactly the partition function that results from constructing a multicalibrated partition with parameters $\varepsilon, \gamma = \varepsilon^2$ on the family $\mathcal{F}'$ (where $\mathcal{F}' = \mathcal{F}_{c,\, c\log|\mathcal{X}|}$). As in Theorem 2.8, such a partition function can be computed by a function in

$$\mathcal{F}'_{O(1/\varepsilon^6),\; O\left((1/\varepsilon^6)\log(|\mathcal{X}|/\varepsilon)\right)}.$$

Replacing each $\mathcal{F}'$-oracle gate with an $\mathcal{F}$-oracle gate together with $O(\log|\mathcal{X}|)$ additional circuitry yields the claim.

The proof of Theorem 3.1 then follows by combining the individual claims.

References

  • [1] Mihir Bellare. A note on negligible functions. J. Cryptol., 15(4):271–284, 2002. doi:10.1007/S00145-002-0116-X.
  • [2] Sílvia Casacuberta, Cynthia Dwork, and Salil Vadhan. Complexity-theoretic implications of multicalibration. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 1071–1082, 2024. doi:10.1145/3618260.3649748.
  • [3] Sílvia Casacuberta, Cynthia Dwork, and Salil P. Vadhan. Complexity-theoretic implications of multicalibration. CoRR, abs/2312.17223, 2023. doi:10.48550/arXiv.2312.17223.
  • [4] Cynthia Dwork, Daniel Lee, Huijia Lin, and Pranay Tankala. From pseudorandomness to multi-group fairness and back. In The Thirty Sixth Annual Conference on Learning Theory, pages 3566–3614. PMLR, 2023.
  • [5] Nathan Geier. A tight computational indistinguishability bound for product distributions. In Theory of Cryptography Conference, pages 333–347. Springer, 2022. doi:10.1007/978-3-031-22365-5_12.
  • [6] Shafi Goldwasser and Silvio Micali. Probabilistic encryption. J. Comput. Syst. Sci., 28(2):270–299, 1984. doi:10.1016/0022-0000(84)90070-9.
  • [7] Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder. Omnipredictors. In Mark Braverman, editor, 13th Innovations in Theoretical Computer Science Conference, ITCS 2022, January 31 - February 3, 2022, Berkeley, CA, USA, volume 215 of LIPIcs, pages 79:1–79:21. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ITCS.2022.79.
  • [8] Parikshit Gopalan, Michael P. Kim, Mihir Singhal, and Shengjia Zhao. Low-degree multicalibration. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 3193–3234. PMLR, 2022. URL: https://proceedings.mlr.press/v178/gopalan22a.html.
  • [9] Shai Halevi and Tal Rabin. Degradation and amplification of computational hardness. In Ran Canetti, editor, Theory of Cryptography, Fifth Theory of Cryptography Conference, TCC 2008, New York, USA, March 19-21, 2008, volume 4948 of Lecture Notes in Computer Science, pages 626–643. Springer, 2008. doi:10.1007/978-3-540-78524-8_34.
  • [10] Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1944–1953. PMLR, 2018. URL: http://proceedings.mlr.press/v80/hebert-johnson18a.html.
  • [11] Thomas Holenstein. Key agreement from weak bit agreement. In Harold N. Gabow and Ronald Fagin, editors, Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 664–673. ACM, 2005. doi:10.1145/1060590.1060689.
  • [12] Russell Impagliazzo. Hard-core distributions for somewhat hard problems. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pages 538–545. IEEE, 1995. doi:10.1109/SFCS.1995.492584.
  • [13] Ueli M. Maurer and Stefano Tessaro. A hardcore lemma for computational indistinguishability: Security amplification for arbitrarily weak prgs with optimal stretch. In Daniele Micciancio, editor, Theory of Cryptography, 7th Theory of Cryptography Conference, TCC 2010, Zurich, Switzerland, February 9-11, 2010. Proceedings, volume 5978 of Lecture Notes in Computer Science, pages 237–254. Springer, 2010. doi:10.1007/978-3-642-11799-2_15.
  • [14] Omer Reingold, Luca Trevisan, Madhur Tulsiani, and Salil Vadhan. Dense subsets of pseudorandom sets. In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, pages 76–85. IEEE, 2008. doi:10.1109/FOCS.2008.38.
  • [15] Ton Steerneman. On the total variation and hellinger distance between signed measures; an application to product measures. Proceedings of the American Mathematical Society, 88(4):684–688, 1983.
  • [16] Salil Vadhan and Colin Jia Zheng. Characterizing pseudoentropy and simplifying pseudorandom generator constructions. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 817–836, 2012. doi:10.1145/2213977.2214051.
  • [17] David Woodruff. CS 15-859: Algorithms for Big Data, Lecture 9, 2019.