All Classical Adversary Methods are Equivalent for Total Functions

We show that all known classical adversary lower bounds on randomized query complexity are equivalent for total functions, and are equal to the fractional block sensitivity $\text{fbs}(f)$. That includes the Kolmogorov complexity bound of Laplante and Magniez and the earlier relational adversary bound of Aaronson. For partial functions, we show unbounded separations between $\text{fbs}(f)$ and other adversary bounds, as well as between the relational and Kolmogorov complexity bounds. We also show that, for partial functions, fractional block sensitivity cannot give lower bounds larger than $\sqrt{n \cdot \text{bs}(f)}$, where $n$ is the number of variables and $\text{bs}(f)$ is the block sensitivity. Then we exhibit a partial function $f$ that matches this upper bound, $\text{fbs}(f) = \Omega(\sqrt{n \cdot \text{bs}(f)})$.


Introduction
Query complexity of functions is one of the simplest and most useful models of computation. It is used to show lower bounds on the amount of time required to solve a computational task, and to compare the capabilities of the quantum, randomized and deterministic models of computation. Thus, providing lower bounds in the query model is essential to understanding the complexity of computational problems.
In the query model, an algorithm has to compute a function f : S → H, given a string x from S ⊆ G^n, where G and H are finite alphabets. With a single query, it can provide the oracle with an index i ∈ [n] and receive back the value x_i. After a number of (possibly adaptive) queries, the algorithm must compute f(x). The cost of the computation is the number of queries made by the algorithm.
The query complexity of a function f in the deterministic setting is denoted by D(f) and is also called the decision tree complexity. The two-sided bounded-error randomized and quantum query complexities are denoted by R(f) and Q(f), respectively (meaning that, given any input, the algorithm must produce a correct answer with probability at least 2/3). For a comprehensive survey on the power of these models, see [BdW02]; for the state-of-the-art relationships between them, see [ABDK16].
In this work, we investigate the relation among a certain set of lower bound techniques on R(f ), called the classical adversary methods, and how they connect to other well-known lower bounds on the randomized query complexity.

Known Lower Bounds
One of the first general lower bound methods on randomized query complexity is Yao's minimax principle, which states that it is sufficient to exhibit a hard distribution on the inputs and lower bound the complexity of any deterministic algorithm under that distribution [Yao77]. Yao's minimax principle is known to be optimal for any function, but it involves a hard-to-describe and hard-to-compute quantity (the complexity of the best deterministic algorithm under some distribution).
More concrete randomized lower bounds are the block sensitivity bs(f) [Nis89] and the approximate degree ~deg(f) of the function, introduced by Nisan and Szegedy [NS94]. Afterwards, Aaronson extended the notion of certificate complexity C(f) (a deterministic lower bound) to the randomized setting by introducing the randomized certificate complexity RC(f) [Aar08]. Following this result, Tal and, independently, Gilmer, Saks and Srinivasan discovered the fractional block sensitivity fbs(f) lower bound [Tal13, GSS16], which is equal to the fractional certificate complexity FC(f), the two measures being dual linear programs. Since these measures are relaxations of block sensitivity and certificate complexity written as integer programs, they satisfy the following hierarchy: bs(f) ≤ fbs(f) = FC(f) ≤ C(f).
Perhaps surprisingly, fractional block sensitivity turned out to be equivalent to randomized certificate complexity, fbs(f) = Θ(RC(f)). Approximate degree and fractional block sensitivity are incomparable in general, but it has been shown that fbs(f) ≤ ~deg(f)^2 [KT16]. Currently one of the strongest lower bounds is the partition bound prt(f) of Jain and Klauck [JK10], which is larger than all of the above-mentioned randomized lower bounds (even the approximate degree), as well as the classical adversary methods listed below. Its power is illustrated by the Tribes_n function (an And of √n Ors on √n variables), where it gives a tight Ω(n) lower bound, while all of the other lower bounds give only O(√n). The quantum query complexity Q(f) is also a powerful lower bound on R(f), as it is incomparable with prt(f) [AKK16]. Recently, Ben-David and Kothari introduced the randomized sabotage complexity RS(f), which can be even larger than prt(f) and Q(f) for some functions [BDK16]; so far no examples are known where it is smaller.
In a separate line of research, Ambainis gave a versatile quantum adversary lower bound method with a wide range of applications [Amb00]. Since then, many generalizations of the quantum adversary method have been introduced (see [ŠS06] for a list of known quantum adversary bounds). Several of these formulations have been lifted back to the randomized setting. Aaronson proved a classical analogue of Ambainis' relational adversary bound and used it to provide a lower bound for the local search problem [Aar06]. Laplante and Magniez introduced the Kolmogorov complexity adversary bound for both quantum and classical settings and showed that it subsumes many other adversary techniques [LM04]. They also gave a classical variation of Ambainis' adversary bound in a different way than Aaronson. Some of the other adversary methods, like the spectral adversary, have not been generalized back to the randomized setting.
While some relations between the adversary bounds had been known before, Špalek and Szegedy proved that practically all known quantum adversary methods are in fact equivalent [ŠS06] (this excludes the general quantum adversary bound, which gives an exact estimate on quantum query complexity for all Boolean functions [HLS07, Rei09]). This result cannot be immediately generalized to the classical setting, as the equivalence follows through the spectral adversary, which has no classical analogue. They also showed that the quantum adversary cannot give lower bounds better than a certain "certificate complexity barrier". Recently, Kulkarni and Tal strengthened the barrier using fractional certificate complexity. Specifically, for any Boolean function f the quantum adversary is at most √(FC_0(f) · FC_1(f)) if f is total, and at most 2√(n · min{FC_0(f), FC_1(f)}) if f is partial [KT16].
With the advances on the quantum adversary front, one could hope for a similar equivalence result to also hold for the classical adversary bounds. Some relations are known: Laplante and Magniez have shown that the Kolmogorov complexity lower bound is at least as strong as Aaronson's relational and Ambainis' weighted adversary bounds [LM04]. Jain and Klauck have noted that the minimax over probability distributions adversary bound is at most C(f) for total functions [JK10]. In general, the relationships among the classical adversary bounds have until this point remained unclear.

Our Results
Our main result shows that the known classical adversary bounds are all equivalent for total functions. That includes Aaronson's relational adversary bound CRA(f), Ambainis' weighted adversary bound CWA(f), the Kolmogorov complexity adversary bound CKA(f) and the minimax over probability distributions adversary bound CMM(f). Surprisingly, they are equivalent to the fractional block sensitivity fbs(f).
We also add to this list a certain restricted version of the relational adversary bound. More specifically, we require that the relation matrix between the inputs has rank 1, and denote this (seemingly weaker) lower bound by CRA_1(f). Thus for total functions CRA(f) = Θ(CRA_1(f)), where the latter is much easier to calculate for Boolean functions.
All this shows that fbs(f) is a fundamental lower bound measure for total functions with many different formulations, including the previously known FC(f) and RC(f). Another interesting corollary: since the quantum certificate complexity QC(f) = Θ(√(RC(f))) is a lower bound on the quantum query complexity [Aar08], taking the square root of any of the adversary bounds above yields a quantum lower bound for total functions.
Along the way, for partial functions we show the equivalence between CRA(f) and CWA(f), and also between CKA(f) and CMM(f). In the case of partial functions, fbs(f) becomes weaker than all these adversary methods. In particular, we show an example of a function where each of these adversary methods gives an Ω(n) lower bound, while fractional block sensitivity is O(1). We also show that CRA(f) and CMM(f) are not equivalent for partial functions, as there exists an example where CRA(f) is constant, but CMM(f) = Θ(log n). Finally, we show a function such that CRA_1(f) = O(√n), but CRA(f) = Ω(n). We also show a "block sensitivity barrier" for fractional block sensitivity: for any partial function f, the fractional block sensitivity is at most √(n · bs(f)). Note that the adversary bounds do not bear this limitation, as witnessed by the aforementioned example. This result is tight, as we exhibit a partial function that matches this upper bound.
Even though our results are similar in spirit to the quantum case in [ŠS06], the proof methods are different.

Preliminaries
In this section we define the complexity measures we work with in the paper. In the following definitions and the rest of the paper, consider f to be a partial function f : S → H with domain S ⊆ G^n, where G, H are finite alphabets and n is the length of the input string. Throughout the paper we assume that f is not constant.

Block Sensitivity. For x ∈ S, a subset of indices B ⊆ [n] is a sensitive block of x if there exists a y ∈ S such that f(x) ≠ f(y) and B = {i | x_i ≠ y_i}. The block sensitivity bs(f, x) of f on x is the maximum number k of disjoint subsets B_1, ..., B_k ⊆ [n] such that each B_i is a sensitive block of x. The block sensitivity of f is defined as bs(f) = max_{x∈S} bs(f, x).
Let B = {B | ∃y ∈ S : f(x) ≠ f(y) and B = {i | x_i ≠ y_i}} be the set of sensitive blocks of x. The fractional block sensitivity fbs(f, x) of f on x is defined as the optimal value of the following linear program:

    fbs(f, x) = max_w Σ_{B∈B} w_x(B)   subject to   ∀i ∈ [n] : Σ_{B∈B: i∈B} w_x(B) ≤ 1.

Here w_x ∈ [0; 1]^{|B|}. The fractional block sensitivity of f is defined as fbs(f) = max_{x∈S} fbs(f, x).
When the weights are restricted to either 0 or 1, the optimal solution to the corresponding integer program is equal to bs(f, x). Hence fbs(f, x) is a relaxation of bs(f, x), and we have bs(f, x) ≤ fbs(f, x).
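These definitions are easy to check by brute force on small functions. The following sketch (our own illustration, not from the paper) enumerates the sensitive blocks of an input and computes bs(f, x) as a maximum packing of pairwise disjoint blocks:

```python
from itertools import product

def sensitive_blocks(f, x):
    """All sensitive blocks of x: index sets B with f(y) != f(x), where y flips exactly B."""
    n = len(x)
    return {frozenset(i for i in range(n) if y[i] != x[i])
            for y in product((0, 1), repeat=n) if f(y) != f(x)}

def bs_at(f, x):
    """Block sensitivity bs(f, x): the maximum number of pairwise disjoint sensitive blocks."""
    blocks = list(sensitive_blocks(f, x))

    def best(i, used):
        if i == len(blocks):
            return 0
        skip = best(i + 1, used)          # do not take block i
        if blocks[i] & used:              # block i overlaps an already chosen block
            return skip
        return max(skip, 1 + best(i + 1, used | blocks[i]))

    return best(0, frozenset())

OR = lambda x: int(any(x))
MAJ = lambda x: int(sum(x) >= 2)

print(bs_at(OR, (0, 0, 0)))   # 3: the singletons {0}, {1}, {2} are disjoint sensitive blocks
print(bs_at(MAJ, (0, 0, 0)))  # 1: every sensitive block has >= 2 of the 3 indices, so any two overlap
```

The exhaustive search is exponential and only meant for such toy instances.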
Certificate complexity. An assignment is a map A : {1, ..., n} → G ∪ {*}. Informally, the elements of G are the values fixed by the assignment, and * is a wildcard symbol that can be any letter of G. A string x ∈ S is said to be consistent with A if for all i ∈ [n] such that A(i) ≠ *, we have x_i = A(i). The length of A is the number of positions that A fixes to a letter of G.
For an h ∈ H, an h-certificate for f is an assignment A such that for all strings x ∈ S consistent with A we have f(x) = h. The certificate complexity C(f, x) of f on x is the length of the shortest f(x)-certificate that x is consistent with. The certificate complexity of f is defined as C(f) = max_{x∈S} C(f, x).
The fractional certificate complexity FC(f, x) of f on x ∈ S is defined as the optimal value of the following linear program:

    FC(f, x) = min_v Σ_{i∈[n]} v_x(i)   subject to   ∀y ∈ S s.t. f(x) ≠ f(y) : Σ_{i: x_i≠y_i} v_x(i) ≥ 1.

Here v_x ∈ [0; 1]^n for each x ∈ S. The fractional certificate complexity of f is defined as FC(f) = max_{x∈S} FC(f, x).
When the weights are restricted to either 0 or 1, the optimal solution to the corresponding integer program is equal to C(f, x). Hence FC(f, x) is a relaxation of C(f, x), and we have FC(f, x) ≤ C(f, x).
It has been shown that fbs(f, x) and FC(f, x) are dual linear programs, hence their optimal values are equal: fbs(f, x) = FC(f, x). As an immediate corollary, fbs(f) = FC(f).
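The duality can be checked on a toy instance. The sketch below (our own example; the certificates are chosen by hand) verifies that a feasible fbs solution and a feasible FC solution for Or on 2 bits at x = 00 have the same value, which by weak LP duality certifies that both are optimal:

```python
from itertools import product

f = lambda x: int(any(x))          # OR on 2 bits
x = (0, 0)
n = 2

# Sensitive blocks of x and a candidate fractional block sensitivity solution w.
blocks = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset({0, 1}): 0.0}

# Primal feasibility: total weight of blocks covering each index is at most 1.
for i in range(n):
    assert sum(w for B, w in blocks.items() if i in B) <= 1

# Candidate fractional certificate v for the same input.
v = (1.0, 1.0)

# Dual feasibility: every y with f(y) != f(x) gets total weight >= 1 on disagreements.
for y in product((0, 1), repeat=n):
    if f(y) != f(x):
        assert sum(v[i] for i in range(n) if y[i] != x[i]) >= 1

primal, dual = sum(blocks.values()), sum(v)
print(primal, dual)  # 2.0 2.0 -- equal feasible values, so both are optimal
```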
One-sided measures. For Boolean functions with H = {0, 1}, for each measure M among bs(f), fbs(f), FC(f), C(f) and a Boolean value b ∈ {0, 1}, define the corresponding one-sided measure as

    M_b(f) = max_{x∈S: f(x)=b} M(f, x).

According to the earlier definitions, we then have M(f) = max{M_0(f), M_1(f)}. These one-sided measures are useful when, for example, working with compositions of Or with some Boolean function.

Kolmogorov complexity.
A set of strings S ⊂ {0, 1}* is called prefix-free if there are no two strings in S such that one is a proper prefix of the other. Equivalently, we can think of the strings as programs for a Turing machine. Let M be a universal Turing machine and fix a prefix-free set S. The prefix-free Kolmogorov complexity of x given y is defined as the length of the shortest program from S that prints x when given y:

    K(x | y) = min{|p| : p ∈ S, M(p, y) = x}.

For a detailed introduction to Kolmogorov complexity, we refer the reader to [LV08].

Classical Adversary Bounds
Let f : S → H be a function, where S ⊆ G^n. The following are all known to be lower bounds on the bounded-error randomized query complexity.
Relational adversary bound [Aar06]. Let R : S × S → ℝ≥0 be a real-valued function such that R(x, y) = R(y, x) for all x, y ∈ S and R(x, y) = 0 whenever f(x) = f(y). For x ∈ S and an index i ∈ [n], define

    θ(x, i) = (Σ_{y∈S} R(x, y)) / (Σ_{y∈S: x_i≠y_i} R(x, y)).

Then

    CRA(f) = max_R min_{x,y∈S, i∈[n]: R(x,y)>0, x_i≠y_i} max{θ(x, i), θ(y, i)}.

Rank-1 relational adversary bound. We introduce the following restriction of the relational adversary bound. Let R be any |S| × |S| matrix of rank 1 satisfying the conditions above, so that R(x, y) = u(x)v(y) for some non-negative vectors u, v. Note that for every x ∈ S, either u(x) or v(x) must be 0, as R(x, x) must be 0; therefore θ(x, i) can be simplified to

    θ(x, i) = (Σ_{y∈S} v(y)) / (Σ_{y∈S: x_i≠y_i} v(y)).

As R(x, y) = 0 whenever f(x) = f(y), we have that for every output h ∈ H, either u or v vanishes on f^{-1}(h). Therefore, CRA_1(f) effectively bounds the complexity of differentiating between two non-overlapping sets of outputs. This leads to the following equivalent definition for CRA_1(f):

Proposition 1. Let A ∪ B = H be a partition of the output alphabet, i.e., A ∩ B = ∅. Let p and q be probability distributions over X := f^{-1}(A) and Y := f^{-1}(B), respectively. Then

    CRA_1(f) = max_{A,B} max_{p,q} min_{i∈[n], g_1≠g_2∈G: ∃x∈X, y∈Y: x_i=g_1, y_i=g_2, p(x)q(y)>0} 1 / min{Pr_{y∼q}[y_i ≠ g_1], Pr_{x∼p}[x_i ≠ g_2]}.
For the proof of this proposition see Appendix A.
Weighted adversary bound [Amb00, LM04]. A pair of weight schemes w, w' satisfies the following:
• Every pair (x, y) is assigned a non-negative weight w(x, y) = w(y, x) such that w(x, y) = 0 whenever f(x) = f(y).
• Every triple (x, y, i) is assigned a non-negative weight w'(x, y, i) such that w'(x, y, i) = 0 whenever x_i = y_i or f(x) = f(y), and w'(x, y, i), w'(y, x, i) ≥ w(x, y) for all x, y, i such that x_i ≠ y_i.
Let wt(x) = Σ_y w(x, y) and v(x, i) = Σ_y w'(x, y, i). Then

    CWA(f) = max_{w,w'} min_{x,y∈S, i∈[n]: w(x,y)>0, x_i≠y_i} max{wt(x)/v(x, i), wt(y)/v(y, i)}.

Minimax over probability distributions [LM04]. Let {p_x}_{x∈S} be a set of probability distributions over [n]. Then

    CMM(f) = min_{{p_x}} max_{x,y∈S: f(x)≠f(y)} 1 / (Σ_{i: x_i≠y_i} min{p_x(i), p_y(i)}).

Kolmogorov complexity adversary bound [LM04]. Let σ ∈ {0, 1}* be a finite string. Then

    CKA(f) = min_σ max_{x,y∈S: f(x)≠f(y)} 1 / (Σ_{i: x_i≠y_i} min{2^{-K(i|x,σ)}, 2^{-K(i|y,σ)}}).

Equivalence of the Adversary Bounds
In this section we prove the main theorem:

Theorem 2. For any partial function f : S → H, where S ⊆ G^n, we have CRA(f) = CWA(f) and CKA(f) = Θ(CMM(f)). Moreover, for total functions f : G^n → H, we have

    fbs(f) = Θ(CRA_1(f)) = Θ(CRA(f)) = Θ(CWA(f)) = Θ(CKA(f)) = Θ(CMM(f)).

The part CWA(f) = O(CKA(f)) has already been proven in [LM04].
(By the argument of [ŠS06], we take the minimum over the strings instead of over the algorithms computing f.)

Fractional Block Sensitivity and the Weighted Adversary Method
First, we prove that fractional block sensitivity lower bounds the relational adversary bound for any partial function.
Proposition 3. Let f : S → H be a partial function, where S ⊆ G^n. Then CRA_1(f) ≥ fbs(f).

Proof. Let x ∈ S be such that fbs(f, x) = fbs(f) and denote h = f(x). Let H' = H \ {h} and S' = f^{-1}(H').

Let B be the set of sensitive blocks of x, and let w : B → [0, 1] be an optimal solution to the fbs(f, x) linear program. For each B ∈ B, fix one input y_B ∈ S' that differs from x exactly on B, and define R(x, y_B) = R(y_B, x) = w(B), with all other entries of R equal to 0. It is clear that R has a corresponding rank 1 matrix R', as it has only one row (corresponding to x) that is not all zeros.

Let y ∈ S' be any input such that R(x, y) > 0. Then for any i ∈ [n] such that x_i ≠ y_i,

    θ(x, i) = (Σ_{B∈B} w(B)) / (Σ_{B∈B: i∈B} w(B)) ≥ fbs(f),

as 0 < Σ_{B∈B: i∈B} w(B) ≤ 1. On the other hand, max{θ(x, i), θ(y, i)} ≥ θ(x, i), hence CRA_1(f) ≥ fbs(f).

As mentioned in [LM04], CRA(f) is a weaker version of CWA(f). We show that in fact they are exactly equal to each other:

Proposition 4. Let f : S → H be a partial function, where S ⊆ G^n. Then CRA(f) = CWA(f).

Proof.
Suppose that R is the function for which the relational bound achieves its maximum value. Let w(x, y) = w(y, x) = w'(x, y, i) = w'(y, x, i) = R(x, y) for any x, y, i such that f(x) ≠ f(y) and x_i ≠ y_i. This pair of weight schemes satisfies the conditions of the weighted adversary bound, and the value of the latter with w, w' is equal to CRA(f). As the weighted adversary bound is a maximization measure, CRA(f) ≤ CWA(f).
Let w, w' be optimal weight schemes for the weighted adversary bound. Let R(x, y) = w(x, y) for any x, y ∈ S such that f(x) ≠ f(y). Then for any i ∈ [n] with x_i ≠ y_i,

    θ(x, i) = (Σ_y w(x, y)) / (Σ_{y: x_i≠y_i} w(x, y)) ≥ wt(x) / v(x, i),

as w'(x, y, i) ≥ w(x, y) by the properties of w, w'. Similarly, θ(y, i) ≥ wt(y)/v(y, i). Therefore, for any x, y ∈ S and i ∈ [n] such that f(x) ≠ f(y) and x_i ≠ y_i, we have

    max{θ(x, i), θ(y, i)} ≥ max{wt(x)/v(x, i), wt(y)/v(y, i)}.
As the relational adversary bound is also a maximization measure, CRA(f ) ≥ CWA(f ).
The proof of this proposition also shows why CRA(f) and CWA(f) are equivalent: the weight function w' is redundant in the classical case (in contrast to the quantum setting).

Kolmogorov Complexity and Minimax over Distributions
In this section we prove the equivalence between the minimax over probability distributions and the Kolmogorov complexity adversary bound. It has been shown in the proof of the main theorem of [LM04] that CMM(f) = Ω(CKA(f)). Here we show the other direction using a well-known result from coding theory.
Proposition 5 (Kraft's inequality). Let S be any prefix-free set of finite strings. Then Σ_{p∈S} 2^{-|p|} ≤ 1.

We now show that CMM(f) = O(CKA(f)).

Proof. Let σ be the binary string for which CKA(f) achieves the smallest value. Define the set of probability distributions {p_x}_{x∈S} on [n] as follows. Let s_x = Σ_{i∈[n]} 2^{-K(i|x,σ)} and p_x(i) = 2^{-K(i|x,σ)}/s_x. The set of shortest programs that print out i ∈ [n], given x and σ, is prefix-free (by the definition of S), as the information given to all programs is the same. Thus by Kraft's inequality, we have s_x ≤ 1.
Examine the value of the minimax bound with this set of probability distributions. For any x, y ∈ S with f(x) ≠ f(y), we have p_x(i) = 2^{-K(i|x,σ)}/s_x ≥ 2^{-K(i|x,σ)} for every i ∈ [n], hence

    Σ_{i: x_i≠y_i} min{p_x(i), p_y(i)} ≥ Σ_{i: x_i≠y_i} min{2^{-K(i|x,σ)}, 2^{-K(i|y,σ)}},

so CMM(f) ≤ CKA(f). Therefore, CKA(f) = Θ(CMM(f)).
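Kraft's inequality itself is easy to sanity-check on a small prefix-free code (a toy example of ours, not tied to any particular Turing machine):

```python
def is_prefix_free(codes):
    """True iff no string in codes is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

codes = ["0", "10", "110", "111"]        # a complete prefix-free code
assert is_prefix_free(codes)
kraft = sum(2 ** -len(c) for c in codes)
print(kraft)  # 1.0, meeting Kraft's bound sum 2^{-|p|} <= 1 with equality
```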

Fractional Block Sensitivity and Minimax over Distributions
Now we proceed to prove that, for total functions, fractional block sensitivity is equal to the minimax over probability distributions. The latter has an equivalent formulation as the following program.
Lemma 7. For any partial function f : S → H, where S ⊆ G^n, CMM(f) is equal to the optimal value µ of the program

    minimize max_{x∈S} Σ_{i∈[n]} v_x(i)   subject to   Σ_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1 for all x, y ∈ S with f(x) ≠ f(y),

where {v_x}_{x∈S} is any set of weight functions v_x : [n] → ℝ≥0.

Proof. Denote by µ the optimal value of the given program.

• First we prove that µ ≤ CMM(f). Construct a set of weight functions {v_x}_{x∈S} by v_x(i) := p_x(i) · CMM(f), where {p_x}_{x∈S} is an optimal set of probability distributions for the minimax bound. Then for any x, y such that f(x) ≠ f(y),

    Σ_{i: x_i≠y_i} min{v_x(i), v_y(i)} = CMM(f) · Σ_{i: x_i≠y_i} min{p_x(i), p_y(i)} ≥ CMM(f) · (1/CMM(f)) = 1,

so this solution is feasible. On the other hand, its value is max_{x∈S} Σ_{i∈[n]} p_x(i) · CMM(f) = CMM(f). Hence µ ≤ CMM(f).

• Now we prove that µ ≥ CMM(f). Let {v_x}_{x∈S} be an optimal solution for the given program. Set s_x = Σ_{i∈[n]} v_x(i). Construct a set of probability distributions {p_x}_{x∈S} by p_x(i) = v_x(i)/s_x. Then for any x, y such that f(x) ≠ f(y), we have

    Σ_{i: x_i≠y_i} min{p_x(i), p_y(i)} ≥ (1/max{s_x, s_y}) · Σ_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1/µ.

Therefore, CMM(f) ≤ µ.
It remains to prove that for total functions the minimax over probability distributions is equal to the fractional certificate complexity FC(f); the result then follows since FC(f) = fbs(f). The proof of this claim is almost immediate in light of the following "fractional certificate intersection" lemma by Kulkarni and Tal:

Proposition 8 ([KT16], Lemma 6.2). Let f : G^n → H be a total function and {v_x}_{x∈G^n} be a feasible solution for the FC(f) linear program. Then for any two inputs x, y ∈ G^n such that f(x) ≠ f(y), we have

    Σ_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1.

Let f be a total function. Suppose that {v_x}_{x∈G^n} is a feasible solution for the CMM(f) program of Lemma 7. Then for any x, y ∈ G^n such that f(x) ≠ f(y),

    Σ_{i: x_i≠y_i} v_x(i) ≥ Σ_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1.

Hence this is also a feasible solution for the FC(f) linear program. On the other hand, if {v_x}_{x∈G^n} is a feasible solution for the FC(f) linear program, then it is also a feasible solution for the CMM(f) program by Proposition 8. Therefore, CMM(f) = FC(f).
Separations for Partial Functions

Fractional Block Sensitivity vs. Adversary Bounds

Here we show an example of a partial function that provides an unbounded separation between the adversary measures and fractional block sensitivity.
Theorem 9. There exists a partial function Gth_n : S → {0, 1}, where S ⊆ {0, 1}^n, such that fbs(Gth_n) = O(1) and CRA_1(Gth_n) = Ω(n).

Proof. Let n be an even number and S = {x ∈ {0, 1}^n | |x| = 1} be the set of bit strings of Hamming weight 1. Define the "greater than half" function Gth_n : S → {0, 1} to be 1 iff the unique index i with x_i = 1 satisfies i > n/2.

For the first part, the certificate complexity is constant, C(Gth_n) = 1: to certify the value of Gth_n, it is enough to certify the position of the unique i such that x_i = 1. The claim follows, as C(f) ≥ fbs(f) for any f.
For the second part, by Theorem 2, it suffices to show that CRA_1(Gth_n) = Ω(n). Let R(x, y) = 1 for all x, y ∈ S with Gth_n(x) ≠ Gth_n(y). Pick any such x, y and an index i with x_i = 1 and y_i = 0. Then θ(x, i) = (n/2)/(n/2) = 1, while θ(y, i) = (n/2)/1 = n/2, as there is exactly one z ∈ S with z_i ≠ y_i. Therefore, max{θ(x, i), θ(y, i)} = n/2. Similarly, if i is such an index that y_i = 1 and x_i = 0, we also have max{θ(x, i), θ(y, i)} = n/2. Also note that R has a corresponding rank 1 matrix R', hence CRA_1(Gth_n) ≥ n/2 = Ω(n).
We note that a similar function was used to prove lower bounds on the problem of inverting a permutation [Amb00, Aar06]. More specifically, we are given a permutation σ(1), ..., σ(n), and the function is 0 if σ^{-1}(1) ≤ n/2 and 1 otherwise. With a single query, one can find the value of σ(i) for any i. By construction, a lower bound on Gth_n also gives a lower bound on computing this function.
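The bound for Gth_n can be verified numerically for small n. The following sketch (our own check, with 0-indexed positions) builds the all-ones relation between the two output classes and evaluates the min-max of the θ values directly:

```python
from itertools import product  # not needed for S below, kept for experimenting with variants

n = 6                          # small even n; Gth is 1 iff the unique 1 lies in the second half
S = [tuple(int(j == i) for j in range(n)) for i in range(n)]
f = lambda x: int(x.index(1) >= n // 2)

# The relation R(x, y) = 1 whenever f(x) != f(y), as in the proof above.
R = {(x, y): 1 for x in S for y in S if f(x) != f(y)}

def theta(x, i):
    tot = sum(R.get((x, y), 0) for y in S)                    # total relation weight at x
    dis = sum(R.get((x, y), 0) for y in S if y[i] != x[i])    # weight on inputs disagreeing at i
    return tot / dis

val = min(max(theta(x, i), theta(y, i))
          for x in S for y in S if R.get((x, y))
          for i in range(n) if x[i] != y[i])
print(val)  # 3.0 == n/2, matching the Omega(n) lower bound this relation certifies
```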

Relational Adversary vs. Kolmogorov Complexity Bound
Here we show that, for a variant of the ordered search problem, the Kolmogorov complexity bound gives a tight logarithmic lower bound, while the relational adversary gives only a constant lower bound.
Theorem 10. There exists a partial function Osp_n : S → {0, 1}, where S ⊆ {0, 1}^n, such that CKA(Osp_n) = Ω(log n), while CRA(Osp_n) = O(1).

Proof. Let S = {1^i 0^{n−i} | 0 ≤ i ≤ n}, and for x = 1^i 0^{n−i} let Ind(x) = i. Define the ordered search parity function Osp_n : S → {0, 1} by Osp_n(x) = Ind(x) mod 2. For simplicity, further assume that n is even.

First, we prove that CKA(Osp_n) = Ω(log n). We use the argument of Laplante and Magniez and the distance scheme method they adapted from [HNS01]:

Proposition 11 ([LM04], Theorem 5). Let f : S → {0, 1} be a Boolean function, where S ⊆ {0, 1}^n. Let D be a non-negative integer function on S^2 such that D(x, y) = 0 whenever f(x) = f(y). Let W = Σ_{x,y: D(x,y)≠0} 1/D(x, y). Define the right load RL(x, i) to be the maximum over all values d of the number of y such that D(x, y) = d and x_i ≠ y_i. The left load LL(y, i) is defined similarly, inverting x and y. Then

    CKA(f) = Ω( W / (|S| · max_{x,i} RL(x, i) · max_{y,i} LL(y, i)) ).

For each pair x, y such that f(x) ≠ f(y) and Ind(x) > Ind(y), let D(x, y) = Ind(x) − Ind(y). Then we have

    W = Σ_{odd d, 1≤d≤n} (n + 1 − d)/d = Ω(n log n).

On the other hand, since for every x ∈ S and positive integer d there is at most one y such that D(x, y) = d, we have that RL(x, i) = LL(y, i) = 1 for any x, y, i such that f(x) ≠ f(y) and x_i ≠ y_i. Hence CKA(Osp_n) = Ω(n log n / n) = Ω(log n).

Now we prove that CRA(Osp_n) ≤ 2. Let N = n/2; we start by fixing an enumeration of S. By x^(i), i ∈ [N + 1], we denote the unique element of S satisfying Ind(x^(i)) = 2i − 2 (it is a negative input for Osp_n); by y^(j), j ∈ [N], we denote the unique element of S satisfying Ind(y^(j)) = 2j − 1 (it is a positive input for Osp_n).
We claim that for every matrix R = (r_ij), i ∈ [N + 1], j ∈ [N], with non-negative entries that is not identically zero, there exist i, j with r_ij > 0 such that

    min_{t: x^(i)_t ≠ y^(j)_t} max{θ(x^(i), t), θ(y^(j), t)} ≤ 2.

Since CRA(Osp_n) is defined only for R which are not identically zero, we conclude that CRA(Osp_n) ≤ 2.

Rank-1 Adversary vs. Relational Adversary
In this section we show a function such that the relational adversary bound CRA(f) is quadratically larger than the rank-1 relational adversary bound CRA_1(f). First we give an example of a non-Boolean function, and then convert it to a Boolean function with the same separation.
Theorem 12. There exists a function f : S → ℕ, where S ⊆ {0, 1}^n, such that CRA_1(f) = O(√n) and CRA(f) = Ω(n).

Proof. Let n be a perfect square and N^2 = n. For an input x ∈ {0, 1}^n, split it into N blocks of N consecutive bits, and denote the j-th bit in the i-th block by x_ij. Then define S to be the set of all inputs x such that the Hamming weight of each block is exactly 1. Let f be any injection on S.
First, we prove that CRA(f) = Ω(N^2). Let R(x, y) = 1 iff x and y differ in exactly 2 bits. Pick any two such inputs x and y, and a position i such that x_i ≠ y_i. W.l.o.g. assume that x_i = 0 and y_i = 1.
• The number of z such that x and z differ in 2 bits is N(N − 1), since we can pick any of the N blocks of x and change the position of the single 1 in that block to any of N − 1 other positions. Hence, Σ_{z∈S} R(x, z) = N(N − 1).
• There is only one z such that z_i ≠ x_i and x and z differ in exactly two bits, as necessarily z_i = 1. Thus, Σ_{z∈S: z_i≠x_i} R(x, z) = 1 and θ(x, i) = N(N − 1)/1. Therefore, for any x, y, i such that R(x, y) > 0 and x_i ≠ y_i, we have max{θ(x, i), θ(y, i)} = N(N − 1), and CRA(f) = Ω(N^2).

Now we prove that CRA_1(f) ≤ N. By Proposition 1, let X, Y be the partition of S and u : X → ℝ, v : Y → ℝ be the probability distributions that achieve CRA_1(f). For b ∈ {0, 1}, denote s(u, i, b) = Σ_{x∈X: x_i=b} u(x) and s(v, i, b) = Σ_{y∈Y: y_i=b} v(y). Then θ(x, i) = 1/s(v, i, 1 − x_i) and θ(y, i) = 1/s(u, i, 1 − y_i). We prove the following lemma:

Lemma 13. For all i ∈ [n], there is a value b ∈ {0, 1} such that s(u, i, b) ≤ p and s(v, i, b) ≤ p, where p := 1/CRA_1(f).

Proof. Assume on the contrary that for each b ∈ {0, 1}, either s(u, i, b) > p or s(v, i, b) > p. We distinguish two cases (write b̄ := 1 − b):
• For some b, we have s(v, i, b) > p and s(u, i, b̄) > p. Then there exist x ∈ X with x_i = b̄, u(x) > 0 and y ∈ Y with y_i = b, v(y) > 0, and for this pair max{θ(x, i), θ(y, i)} = max{1/s(v, i, b), 1/s(u, i, b̄)} < 1/p = CRA_1(f), a contradiction.
• W.l.o.g., s(u, i, 0) > p, s(u, i, 1) > p, s(v, i, 0) ≤ p and s(v, i, 1) ≤ p. In that case 2p < s(u, i, 0) + s(u, i, 1) = 1 = s(v, i, 0) + s(v, i, 1) ≤ 2p, a contradiction.

Now assume on the contrary that CRA_1(f) > N, so that p < 1/N. Let b_i be the value that satisfies the conditions of Lemma 13 for each i ∈ [n], and define z ∈ {0, 1}^n by z_i := b̄_i.

First, we prove that z ∈ S. Pick any i ∈ [N] (any block). Let B = {(i − 1)N + 1, ..., iN} be the set of variables of the i-th block. Any x ∈ X that disagrees with z at some j ∈ B has x_j = b_j there, so the total u-mass of such x is at most

    Σ_{j∈B} s(u, j, b_j) ≤ N · p < 1

by the lemma and the assumption. Since Σ_{x∈X} u(x) = 1, there is an x ∈ X such that x_j = z_j for all j ∈ B; thus the i-th block of z is a correct Hamming weight 1 block. Since we picked i arbitrarily, each block of z is correct and z ∈ S.

Now, we prove that z ∈ X. Examine any x ∈ X that is not z. The inputs x and z differ in at least one block, hence they have 1s in different positions in that block. Thus there is a position i such that z_i = 1 and x_i = 0; as z_i = b̄_i, this means x_i = b_i. Therefore, we have

    Σ_{x∈X: x≠z} u(x) ≤ Σ_{i: z_i=1} s(u, i, b_i) ≤ N · p < 1

by the lemma and the assumption. Since Σ_{x∈X} u(x) = 1, it follows that u(z) > 0, thus z ∈ X.
Similarly, we prove that z ∈ Y and we get a contradiction.
We can extend this result to Boolean functions:

Theorem 14. There exists a Boolean function f : S → {0, 1}, where S ⊆ {0, 1}^n, such that CRA_1(f) = O(√n) and CRA(f) = Ω(n).

Proof. Let S be the same as in Theorem 12. Define f(x) as the parity of the sum of the positions of the 1s within their blocks, f(x) = Σ_{i,j∈[N]} j · x_ij mod 2.

For CRA(f), now define R(x, y) = 1 iff f(x) ≠ f(y) and x and y differ in exactly 2 bits. For any x, we can change the position of any 1 in any block to a position of a different parity in that block in either ⌊N/2⌋ or ⌈N/2⌉ ways, and any such change flips f. Therefore, Σ_{y∈S} R(x, y) ≥ N · ⌊N/2⌋ = Ω(N^2). By the same argument as in the previous proof, we have CRA(f) = Ω(N^2) = Ω(n).
On the other hand, the argument for the rank-1 adversary from the previous proof works for any partition X, Y (in this case, X = f^{-1}(0), Y = f^{-1}(1)). Hence, we still have CRA_1(f) = O(N) = O(√n).

Limitation of Fractional Block Sensitivity
In this section we show that there is a certain barrier that the fractional block sensitivity cannot overcome for partial functions.

Upper Bound in Terms of Block Sensitivity
Theorem 15. For any partial function f : S → H, where S ⊆ G^n, and any x ∈ S,

    fbs(f, x) ≤ √(n · bs(f, x)).

Proof. First we introduce a parametrized version of fractional block sensitivity. Let x ∈ S be any input, B the set of sensitive blocks of x, and N ≤ n a positive real number. Define

    fbs_N(f, x) = max_w Σ_{B∈B} w(B)   s.t.   ∀i ∈ [n] : Σ_{B∈B: i∈B} w(B) ≤ 1,   Σ_{B∈B} |B| · w(B) ≤ N,

where w : B → [0; 1]; note that fbs_n(f, x) = fbs(f, x). Let k = bs(f, x) and let ℓ be the length of the shortest sensitive block. At the end of the appendix we show by induction on k that

    fbs_N(f, x) ≤ max_{ℓ∈[0;n]} min{N/ℓ, max_{t∈[0;ℓ]} (t + √((N − ℓt)(k − 1)))}.

If N/ℓ ≤ √(Nk), we are done. Thus further assume that ℓ < √(N/k). Denote g(t) = t + √((N − ℓt)(k − 1)). We need to find the maximum of this function on the interval [0; ℓ] for a given ℓ. Its derivative,

    g'(t) = 1 − ℓ(k − 1) / (2√((N − ℓt)(k − 1))),

has exactly one root, t_0 = N/ℓ − ℓ(k − 1)/4. Therefore, g(t) attains its maximum value on [0; ℓ] at one of the points {0, t_0, ℓ}.
The case t = t_0 leads to an inequality that has no solutions in natural numbers for k, so this case is not possible.

Now it remains to find the maximum value of h(ℓ) := g(ℓ) = ℓ + √((N − ℓ^2)(k − 1)). The only non-negative root of h'(ℓ) is ℓ_0 = √(N/k). Then h(ℓ) is monotone on the interval [0; √(N/k)], so h attains its maximal value at one of the points {0, √(N/k)}. As h(0) = √(N(k − 1)) and h(√(N/k)) = √(N/k) · k = √(Nk), in both cases fbs_N(f, x) ≤ √(Nk), completing the induction.
We also give a simpler proof of the same (asymptotic) upper bound:

Theorem 16. For any partial function f : S → H, where S ⊆ G^n, and any x ∈ S,

    fbs(f, x) = O(√(n · bs(f, x))).

Proof. We show that for all x ∈ S, we have FC(f, x) = O(√(n · bs(f, x))). The claim then follows, as fbs(f, x) = FC(f, x).
Since FC(f, x) is a minimization linear program, it suffices to exhibit a fractional certificate v of size at most O(√(n · bs(f, x))). Let k be a parameter between 1 and n. Let B be a maximal set of pairwise disjoint sensitive blocks of x of size at most k each; then |B| ≤ bs(f, x). Let S' = ∪_{B∈B} B be the set of all positions in blocks of B. We construct the fractional certificate v by setting v(i) = 1 for all i ∈ S', and v(i) = 1/k for all i ∉ S'.
Let B' be any sensitive block of x. If |B'| > k, then Σ_{i∈B'} v(i) ≥ |B'|/k ≥ 1. If |B'| ≤ k, then, as B is a maximal set of pairwise disjoint sensitive blocks of size at most k, there must exist a B ∈ B such that B ∩ B' ≠ ∅, and any i ∈ B ∩ B' has v(i) = 1. Hence v is feasible, and its size is at most

    Σ_{i∈[n]} v(i) ≤ k · bs(f, x) + n/k.

The last expression asymptotically reaches its minimum at k = √(n/bs(f, x)), giving FC(f, x) = O(√(n · bs(f, x))).
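The choice of k in the last step is the standard balancing argument; spelled out by AM-GM (writing bs for bs(f, x)):

```latex
\[
  k \cdot \mathrm{bs} + \frac{n}{k}
  \;\ge\; 2\sqrt{\,k \cdot \mathrm{bs} \cdot \frac{n}{k}\,}
  \;=\; 2\sqrt{n \cdot \mathrm{bs}},
  \qquad \text{with equality at } k = \sqrt{n/\mathrm{bs}}.
\]
```

Since k must be an integer, rounding √(n/bs) up changes the bound by at most a constant factor.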
Theorem 17. There exists a partial function f : S → {0, 1}, where S ⊆ {0, 1}^n, such that fbs(f) = Ω(√(n · bs(f))).

Proof. Take any finite projective plane of order t; it has ℓ := t^2 + t + 1 points. Let n = k · ℓ for a parameter k, and enumerate the points with integers from 1 to ℓ. Let X = {0^ℓ} and Y = {y ∈ {0, 1}^ℓ | there exists a line L such that y_i = 1 iff i ∈ L}. Define the (partial) finite projective plane function Fpp_t : X ∪ Y → {0, 1} as Fpp_t(y) = 1 ⟺ y ∈ Y.
We can calculate the one-sided block sensitivity measures for this function:
• fbs_0(Fpp_t) ≥ (t^2 + t + 1) · 1/(t + 1) = Ω(t): each line gives a sensitive block for 0^ℓ, and since each point belongs to t + 1 lines, assigning weight 1/(t + 1) to each sensitive block is a feasible solution for the fractional block sensitivity linear program.
• bs_0(Fpp_t) = 1, as any two lines intersect, so any two sensitive blocks of 0^ℓ overlap.
• bs 1 (Fpp t ) = 1, as there is only one negative input.
Next, define f : S^×k → {0, 1} as the composition of Or with the finite projective plane function, f = Or_k(Fpp_t(x^(1)), ..., Fpp_t(x^(k))). By the properties of composition with Or (see Proposition 31 in [GSS16] for details), we have

    fbs(f) ≥ k · fbs_0(Fpp_t) = Ω(k · t) = Ω(n/t)   and   bs(f) ≤ k · max{bs_0(Fpp_t), bs_1(Fpp_t)} = O(n/t^2).

As √(n · n/t^2) = n/t, we have fbs(f) = Ω(√(n · bs(f))), and hence the result.
Note that our example is also tight with regard to the multiplicative constant, since t can be unboundedly large (and the constant arbitrarily close to 1).
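The smallest case t = 2 is the Fano plane, which makes the one-sided measures above concrete. The labeling of the seven lines below is one standard choice (ours, not taken from the paper):

```python
from itertools import combinations

# Fano plane (order t = 2): 7 points, 7 lines, every line has t + 1 = 3 points
# and every point lies on t + 1 = 3 lines.
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Any two lines intersect, so any two sensitive blocks of 0^l overlap: bs_0(Fpp_2) = 1.
assert all(L & M for L, M in combinations(lines, 2))

# Each point lies on exactly 3 lines, so weight 1/3 per line is feasible for fbs_0.
for p in range(1, 8):
    assert sum(1 for L in lines if p in L) == 3

print(len(lines) / 3)  # 7/3, the feasible value (t^2 + t + 1)/(t + 1) for t = 2
```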

Open Ends
Limitation of the Adversary Bounds. In the quantum setting, the certificate barrier shows a limitation on the quantum adversary bounds. In the classical setting, by our results, fractional block sensitivity characterizes the classical adversary bounds for total functions and thus is of course an upper bound. Is there a general limitation on the classical adversary methods for partial functions?

Block Sensitivity vs. Fractional Block Sensitivity. We have exhibited an example with the largest possible separation between the two measures for partial functions, fbs(f) = Θ(√(n · bs(f))). For total functions, one can show that fbs(f) ≤ bs(f)^2, but the best known separations achieve only fbs(f) = Ω(bs(f)^{3/2}) [GSS16, APV18]. Can our results be somehow extended to total functions to close the gap?
Appendix A: Rank-1 Relational Adversary Definition

Proof of Proposition 1. Let u, v be vectors that maximize CRA_1(f). Let h ∈ H be any letter and S_h = f^{-1}(h). Since for every x, y such that f(x) = f(y) we have u(x)v(y) = 0, it follows that either u(x) = 0 for all x ∈ S_h or v(x) = 0 for all x ∈ S_h. Therefore, we can find a partition A ∪ B = H such that u(x) = 0 whenever f(x) ∈ B and v(y) = 0 whenever f(y) ∈ A. This partition therefore also defines a partition of the inputs, X ∪ Y = S, where X = f^{-1}(A) and Y = f^{-1}(B).

Now, notice that θ(x, i) does not depend on the particular choice of x if x_i := g_1 ∈ G is fixed. Similarly, let y_i := g_2 ∈ G be fixed; then θ(y, i) does not depend on the particular choice of y. This allows us to simplify the expression for CRA_1(f), since for each i we can fix values g_1 ≠ g_2 (such that there exist x ∈ X, y ∈ Y with u(x)v(y) > 0 and x_i = g_1 and y_i = g_2) and ignore the remaining components of x, y.

Further assume that both X and Y are non-empty, because otherwise the value of CRA_1 would not be defined. Notice that multiplying either u or v by any scalar does not affect the value of CRA_1. Hence, we can scale u and v to probability distributions p and q over X and Y, respectively. More specifically, we can further simplify CRA_1:

    CRA_1(f) = max_{A,B: A∪B=H} max_{p,q} min_{i∈[n], g_1≠g_2∈G: ∃x∈X, y∈Y: x_i=g_1, y_i=g_2, p(x)q(y)>0} 1 / min{Σ_{y∈Y: y_i≠g_1} q(y), Σ_{x∈X: x_i≠g_2} p(x)}.
We can further simplify this definition if the inputs are Boolean (Proposition 18).

Proof. For $g_1, g_2 \in \{0,1\}$, $g_1 \neq g_2$ implies $g_2 = g_1 \oplus 1$. It follows that
$$\text{CRA}_1(f) = \max_{A, B,\, p, q} \; \min_{\substack{i \in [n],\; b \in \{0,1\}:\\ \exists x \in X,\, y \in Y:\; x_i = b,\; y_i \neq b,\; p(x)q(y) > 0}} \; \frac{1}{\min\left\{ \Pr_{x \sim p}[x_i = b],\; \Pr_{y \sim q}[y_i \neq b] \right\}}.$$
Moreover, we can drop the requirement $\exists x \in X, y \in Y : x_i = b, y_i \neq b, p(x)q(y) > 0$. To see that, fix any $p, q$, and consider the quantities $\alpha$ and $\beta$ defined below.

We also note that $\text{CRA}_1(f)$ can be found in the following way. Let $A \cup B = H$ be any suitable partition of $H$ and denote
$$\text{CRA}_1(f, A, B) = \max_{p, q} \; \min_{\substack{i \in [n],\; g_1, g_2 \in G,\; g_1 \neq g_2:\\ \exists x \in X,\, y \in Y:\; x_i = g_1,\; y_i = g_2,\; p(x)q(y) > 0}} \; \frac{1}{\min\left\{ \Pr_{y \sim q}[y_i \neq g_1],\; \Pr_{x \sim p}[x_i \neq g_2] \right\}}.$$
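For concreteness, the inner objective of this Boolean form is easy to evaluate for a fixed witness. The following sketch is our own illustration, not code from the paper: for $f = \mathrm{OR}_2$ with the partition $A = \{0\}$, $B = \{1\}$, the witness $p(00) = 1$, $q(01) = q(10) = 1/2$ gives value $2$, which matches $\text{fbs}(\mathrm{OR}_2) = 2$ (for total functions the classical adversary bounds equal fbs).

```python
def cra1_objective(p, q, n):
    """Inner objective of the Boolean form of CRA_1 for fixed witness p, q:
    the minimum over witnessed pairs (i, b) of
        1 / min{ Pr_{x~p}[x_i = b], Pr_{y~q}[y_i != b] }.
    p and q are dicts mapping n-bit tuples to probabilities; a pair (i, b)
    is witnessed when both probabilities in the min are positive.
    Returns float('inf') if no pair is witnessed."""
    best = float('inf')
    for i in range(n):
        for b in (0, 1):
            px = sum(pr for x, pr in p.items() if x[i] == b)
            qy = sum(pr for y, pr in q.items() if y[i] != b)
            if px > 0 and qy > 0:
                best = min(best, 1.0 / min(px, qy))
    return best

# OR_2 with A = {0}, B = {1}: X = {00}, Y = {01, 10, 11}.
p = {(0, 0): 1.0}
q = {(0, 1): 0.5, (1, 0): 0.5}
```

Since this only evaluates one witness, it certifies a lower bound on the maximum over $p, q$, not the optimum itself.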
Then $\text{CRA}_1(f) = \max_{A,B} \text{CRA}_1(f, A, B)$. On the other hand, for each fixed partition $A, B$ the value $\text{CRA}_1(f, A, B)$ can be found from the program stated in Proposition 19 below. The proof is analogous to that of Lemma 7.
Proof of Proposition 19. Denote the optimal value of this program by $\mu$. Then $\mu \leq \text{CRA}_1(f, A, B)$, since we can take $p(x) = w_x/\mu$ and $q(y) = w_y/\mu$, where $\{w_x\}_{x \in S}$ is an optimal solution of the program. This way we obtain a feasible solution for $\text{CRA}_1(f, A, B)$, which gives
$$\min\left\{ \sum_{y \in Y:\, y_i \neq g_1} q(y),\; \sum_{x \in X:\, x_i \neq g_2} p(x) \right\} \leq \frac{1}{\mu}$$
for each $i \in [n]$ and $g_1, g_2 \in G$ such that $g_1 \neq g_2$ and there exist $x \in X$, $y \in Y$ with $x_i = g_1$ and $y_i = g_2$; thus $\text{CRA}_1(f, A, B) \geq \mu$.
Let us show the converse inequality. If the probability distributions $p, q$ provide an optimal solution for $\text{CRA}_1(f, A, B)$, then $w_x = p(x) \cdot \text{CRA}_1(f, A, B)$ and $w_y = q(y) \cdot \text{CRA}_1(f, A, B)$ give a feasible solution for the program, and the value of this solution is $\sum_{x \in X} w_x = \text{CRA}_1(f, A, B)$. Hence also $\text{CRA}_1(f, A, B) \leq \mu$.
For Boolean outputs, the partition of $H$ can be fixed to $A = \{0\}$, $B = \{1\}$, giving a single program. For Boolean inputs, the condition $g_1, g_2 \in G$, $g_1 \neq g_2$, $w_x w_y > 0$ can be replaced simply by $b \in \{0,1\}$ by Proposition 18. Therefore, for Boolean functions this program can be recast as a mixed-integer linear program, providing an algorithm for finding $\text{CRA}_1(f)$.
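The recast can be sketched as follows; this is our own illustration, not code from the paper. Under our reading of the program for Boolean inputs, each disjunctive constraint $\min\{\sum_{x \in X: x_i = b} w_x,\ \sum_{y \in Y: y_i \neq b} w_y\} \leq 1$ is linearized with one binary variable and a big-$M$ term, and the resulting mixed-integer linear program is solved with `scipy.optimize.milp`. For $\mathrm{OR}_n$ the optimum should equal $\text{fbs}(\mathrm{OR}_n, 0^n) = n$.

```python
from itertools import product
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def cra1_boolean(f, n):
    """Solve our MILP reading of the program for CRA_1(f, {0}, {1}).

    f: total Boolean function on {0,1}^n given as a dict tuple -> {0,1}.
    Variables: one weight w_x per input and one binary z_{i,b} per (i, b).
    The constraint min{sum_{x in X: x_i=b} w_x, sum_{y in Y: y_i!=b} w_y} <= 1
    is linearized as
        sum_{x in X: x_i=b} w_x <= 1 + M * z_{i,b},
        sum_{y in Y: y_i!=b} w_y <= 1 + M * (1 - z_{i,b}).
    """
    inputs = [tuple(v) for v in product((0, 1), repeat=n)]
    X = [x for x in inputs if f[x] == 0]
    Y = [y for y in inputs if f[y] == 1]
    nw, nz = len(inputs), 2 * n
    M = float(n + 1)                      # any bound exceeding the optimum works
    idx = {x: j for j, x in enumerate(inputs)}
    rows, lbs, ubs = [], [], []

    # Equality: sum_{x in X} w_x = sum_{y in Y} w_y.
    row = np.zeros(nw + nz)
    for x in X: row[idx[x]] = 1.0
    for y in Y: row[idx[y]] = -1.0
    rows.append(row); lbs.append(0.0); ubs.append(0.0)

    for i in range(n):
        for b in (0, 1):
            zcol = nw + 2 * i + b
            rx = np.zeros(nw + nz)        # X-side branch of the disjunction
            for x in X:
                if x[i] == b: rx[idx[x]] = 1.0
            rx[zcol] = -M
            rows.append(rx); lbs.append(-np.inf); ubs.append(1.0)
            ry = np.zeros(nw + nz)        # Y-side branch of the disjunction
            for y in Y:
                if y[i] != b: ry[idx[y]] = 1.0
            ry[zcol] = M
            rows.append(ry); lbs.append(-np.inf); ubs.append(1.0 + M)

    c = np.zeros(nw + nz)
    for x in X: c[idx[x]] = -1.0          # milp minimizes, so negate
    integrality = np.concatenate([np.zeros(nw), np.ones(nz)])
    bounds = Bounds(np.zeros(nw + nz),
                    np.concatenate([np.full(nw, np.inf), np.ones(nz)]))
    res = milp(c, constraints=LinearConstraint(np.array(rows), lbs, ubs),
               integrality=integrality, bounds=bounds)
    assert res.success
    return -res.fun

# OR_3: the program should attain fbs(OR_3, 000) = 3.
f = {x: int(any(x)) for x in product((0, 1), repeat=3)}
```

The big-$M$ choice $M = n + 1$ suffices here because the optimum of the program never exceeds $n$ for these instances.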
$$\text{fbs}_N(f, x) = \max_w \sum_{B \in \mathcal{B}} w(B) \quad \text{s.t.} \quad \forall i \in [n]: \sum_{B \in \mathcal{B}:\, i \in B} w(B) \leq 1, \qquad \sum_{B \in \mathcal{B}} |B| \cdot w(B) \leq N,$$
where $w : \mathcal{B} \to [0; 1]$. If we let $N = n$, then the second condition becomes redundant and $\text{fbs}_n(f, x) = \text{fbs}(f, x)$.

For simplicity, let $k = \text{bs}(f, x)$. We will prove by induction on $k$ that $\text{fbs}_N(f, x) \leq \sqrt{Nk}$. If $k = 0$, the claim obviously holds, so assume $k > 0$. Let $\ell$ be the length of the shortest block in $\mathcal{B}$. Then $\sum_{B \in \mathcal{B}} \ell \cdot w(B) \leq \sum_{B \in \mathcal{B}} |B| \cdot w(B) \leq N$ and $\text{fbs}_N(f, x) = \sum_{B \in \mathcal{B}} w(B) \leq N/\ell$.

On the other hand, let $D$ be any shortest sensitive block. Let $f'$ be the restriction of $f$ where the variables with indices in $D$ are fixed to the values of $x_i$ for all $i \in D$. Note that $\text{bs}(f', x) \leq k - 1$, as we have removed all sensitive blocks that overlap with $D$. Let $\mathcal{B}'$ be the set of sensitive blocks of $x$ on $f'$ and let $\mathcal{T} = \{B \in \mathcal{B} \mid B \cap D \neq \emptyset\}$, the set of sensitive blocks that overlap with $D$ (including $D$ itself). Then no $T \in \mathcal{T}$ is a member of $\mathcal{B}'$, therefore
$$\sum_{B' \in \mathcal{B}'} |B'| \cdot w(B') \leq N - \sum_{T \in \mathcal{T}} |T| \cdot w(T) \leq N - \ell \cdot \sum_{T \in \mathcal{T}} w(T).$$
Denote $t = \sum_{T \in \mathcal{T}} w(T)$. We have that $t \leq |D| = \ell$, as any $T \in \mathcal{T}$ overlaps with $D$. By combining the two inequalities we get
$$\text{fbs}_N(f, x) \leq \max_{\ell \in [0; n]} \min\left\{ \frac{N}{\ell},\; \max_{t \in [0; \ell]} \left( t + \text{fbs}_{N - \ell t}(f', x) \right) \right\} \leq \max_{\ell \in [0; n]} \min\left\{ \frac{N}{\ell},\; \max_{t \in [0; \ell]} \left( t + \sqrt{(N - \ell t)(k - 1)} \right) \right\},$$
where the last step uses the induction hypothesis.
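The linear program defining $\text{fbs}_N(f, x)$ can be solved directly for small truth tables. The sketch below is our own illustration, not code from the paper, and the names `sensitive_blocks` and `fbs_N` are ours; it uses `scipy.optimize.linprog`. For $\mathrm{OR}_3$ at $x = 000$ every nonempty block is sensitive, the optimum is $3$, and the bound $\sqrt{N \cdot \text{bs}(f, x)}$ is attained with $N = n = 3$.

```python
from itertools import product, combinations
import numpy as np
from scipy.optimize import linprog

def sensitive_blocks(f, x, n):
    """All blocks B (tuples of indices) with f(x^B) != f(x)."""
    blocks = []
    for r in range(1, n + 1):
        for B in combinations(range(n), r):
            y = tuple(xi ^ 1 if i in B else xi for i, xi in enumerate(x))
            if f[y] != f[x]:
                blocks.append(B)
    return blocks

def fbs_N(f, x, n, N):
    """LP: maximize sum_B w(B) subject to, for each index i,
    sum_{B containing i} w(B) <= 1, and sum_B |B| * w(B) <= N, 0 <= w <= 1."""
    blocks = sensitive_blocks(f, x, n)
    m = len(blocks)
    if m == 0:
        return 0.0
    c = -np.ones(m)                      # linprog minimizes, so negate
    A = np.zeros((n + 1, m))
    for j, B in enumerate(blocks):
        for i in B:
            A[i, j] = 1.0                # per-index load constraint
        A[n, j] = len(B)                 # total-size budget row
    rhs = np.concatenate([np.ones(n), [float(N)]])
    res = linprog(c, A_ub=A, b_ub=rhs, bounds=(0.0, 1.0))
    return -res.fun

f = {x: int(any(x)) for x in product((0, 1), repeat=3)}
x0 = (0, 0, 0)
```

Enumerating all $2^n - 1$ blocks is exponential in $n$, which is fine for illustration but not for large inputs.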
$$\alpha = \max_{\substack{i \in [n],\; b \in \{0,1\}:\\ \exists x \in X,\, y \in Y:\; x_i = b,\; y_i \neq b,\; p(x)q(y) > 0}} \min\left\{ \Pr_{x \sim p}[x_i = b],\; \Pr_{y \sim q}[y_i \neq b] \right\}, \qquad \beta = \max_{i \in [n],\; b \in \{0,1\}} \min\left\{ \Pr_{x \sim p}[x_i = b],\; \Pr_{y \sim q}[y_i \neq b] \right\}.$$
Clearly, $\alpha \leq \beta$. To show the converse inequality, consider any $i \in [n]$ and (if such exists) $b \in \{0,1\}$ satisfying $u(x)v(y) = 0$ for any $x \in X$, $y \in Y$ with $x_i = b$, $y_i \neq b$ (to deal with the possibility that no such $x, y$ exist, we consider the empty sum to be zero). Then also
$$0 = \sum_{\substack{x \in X,\, y \in Y:\\ x_i = b,\; y_i \neq b}} p(x)q(y) = \Pr_{x \sim p}[x_i = b] \cdot \Pr_{y \sim q}[y_i \neq b].$$
Therefore, $\min\{\Pr_{x \sim p}[x_i = b], \Pr_{y \sim q}[y_i \neq b]\} = 0 \leq \alpha$. Thus $\alpha = \beta$, and the claim follows.
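As a quick numerical sanity check (ours, not from the paper), one can verify $\alpha = \beta$ for random distributions: every pair $(i, b)$ excluded from $\alpha$ contributes exactly $0$ to $\beta$. The example function $\mathrm{MAJ}_3$ and the sparse random distributions are our own choices.

```python
from itertools import product
import random

n = 3
inputs = [tuple(v) for v in product((0, 1), repeat=n)]
f = {x: int(sum(x) >= 2) for x in inputs}   # MAJ_3, an arbitrary small example
X = [x for x in inputs if f[x] == 0]
Y = [y for y in inputs if f[y] == 1]

def rand_dist(support):
    """Random distribution; some elements get probability exactly 0,
    so the existence requirement in alpha is genuinely restrictive."""
    w = [random.random() if random.random() < 0.7 else 0.0 for _ in support]
    if sum(w) == 0:
        w[0] = 1.0
    s = sum(w)
    return {z: wi / s for z, wi in zip(support, w)}

def alpha_beta(p, q):
    """alpha restricts (i, b) to pairs witnessed by supp(p) x supp(q);
    beta ranges over all pairs (i, b)."""
    alpha, beta = 0.0, 0.0
    for i in range(n):
        for b in (0, 1):
            px = sum(p[x] for x in X if x[i] == b)
            qy = sum(q[y] for y in Y if y[i] != b)
            val = min(px, qy)
            beta = max(beta, val)
            witnessed = any(p[x] > 0 and q[y] > 0
                            for x in X if x[i] == b
                            for y in Y if y[i] != b)
            if witnessed:
                alpha = max(alpha, val)
    return alpha, beta
```

Running `alpha_beta` on many random pairs $(p, q)$ never separates the two quantities, in line with the argument above.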

Proposition 19. Let $f : S \to H$, where $S \subseteq G^n$. Let $A \cup B = H$ be any partition of $H$ such that $A, B \neq \emptyset$. Let $X = f^{-1}(A)$ and $Y = f^{-1}(B)$. The value of $\text{CRA}_1(f, A, B)$ is equal to the optimal value of the following program:
$$\max_{w} \sum_{x \in X} w_x \quad \text{s.t.} \quad \sum_{x \in X} w_x = \sum_{y \in Y} w_y, \qquad \min\Big\{ \sum_{x \in X:\, x_i \neq g_2} w_x,\; \sum_{y \in Y:\, y_i \neq g_1} w_y \Big\} \leq 1,$$
where $w_x \geq 0$ for all $x \in S$, and the latter constraint is imposed for all $i \in [n]$ and $g_1, g_2 \in G$ with $g_1 \neq g_2$ such that there exist $x \in X$, $y \in Y$ with $x_i = g_1$, $y_i = g_2$ and $w_x w_y > 0$.