Parameterized Property Testing of Functions

We investigate the parameters in terms of which the complexity of sublinear-time algorithms should be expressed. Our goal is to find input parameters that are tailored to the combinatorics of the specific problem being studied and design algorithms that run faster when these parameters are small. This direction enables us to surpass the (worst-case) lower bounds, expressed in terms of the input size, for several problems. Our aim is to develop a similar level of understanding of the complexity of sublinear-time algorithms to the one that was enabled by research in parameterized complexity for classical algorithms. Specifically, we focus on testing properties of functions. By parameterizing the query complexity in terms of the size r of the image of the input function, we obtain testers for monotonicity and convexity of functions of the form f : [n] → R with query complexity O(log r), with no dependence on n. The result for monotonicity circumvents the Ω(log n) lower bound by Fischer (Inf. Comput. 2004) for this problem. We present several other parameterized testers, providing compelling evidence that expressing the query complexity of property testers in terms of the input size is not always the best choice.


INTRODUCTION
In this article, we set out to investigate the parameters in terms of which the complexity of sublinear-time algorithms should be expressed. Our goal is to find input parameters that are tailored to the combinatorics of the specific problem being studied and design algorithms that run faster when these parameters are small. This direction could enable one to surpass the (worst-case) lower bounds on the problem complexity that are usually expressed in terms of the input size. The spirit of our study is similar to that in the field of parameterized complexity. In parameterized complexity, the focus is on expressing the complexity of problems as a function of one or more input parameters in order to obtain a fine-grained complexity classification, for example, of NP-hard problems. Our aim is to develop a similar level of understanding of the complexity of sublinear-time algorithms to the one that was enabled by research in parameterized complexity for classical algorithms.
We focus our study on the framework of property testing, introduced by Goldreich et al. [31] and Rubinfeld and Sudan [45]. In property testing, an algorithm (an ε-tester) for property P, where P is viewed as a class of functions, is given a parameter ε ∈ (0, 1) as input and has oracle access to a function f . The tester has to accept with probability at least 2/3 if f belongs to the class P, and reject with probability at least 2/3 if f is ε-far from P, that is, differs from every function in P on at least an ε fraction of function values. In the context of property testing of functions, the query complexity of a tester is usually expressed in terms of ε and the size of the domain of the input function. This works well for properties whose query complexity depends only on the proximity parameter ε. However, for other properties, it is not clear whether the domain size is the right parameter to express their testing complexity.
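In symbols, with D denoting the domain of f and dist(f, P) the relative Hamming distance of f to the property, the tester's task is to distinguish dist(f, P) = 0 from dist(f, P) ≥ ε:

\[
\mathrm{dist}(f,\mathcal{P}) \;=\; \min_{g \in \mathcal{P}} \frac{\bigl|\{x \in D : f(x) \neq g(x)\}\bigr|}{|D|},
\qquad
f \text{ is } \varepsilon\text{-far from } \mathcal{P} \;\Longleftrightarrow\; \mathrm{dist}(f,\mathcal{P}) \ge \varepsilon .
\]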
Consider, for example, the widely studied problem of testing monotonicity of real-valued functions (see, e.g., [1, 3, 5, 6, 10-14, 16-21, 24-27, 29, 30, 34, 35, 38, 39] and the recent surveys [15, 43]). For functions over a discrete domain [n] (also called the line), monotonicity testing is equivalent to testing sortedness of arrays. Algorithms for sortedness testing have found use, for instance, in determining the "state of sortedness" of relational databases [7], where the testing step is performed to decide which sorting algorithms to run on the database. The complexity of sortedness testing (for constant ε) is Θ(√n) if the tester is only allowed to make independent and uniformly random queries [29]; it is Θ(log n) if the tester is allowed to make arbitrary queries [26, 27].
From the above discussion, it might appear that one cannot make any more improvements to the complexity of monotonicity testing over [n]. However, we argue that this is the case only when the complexity of the problem is parameterized in terms of n, the domain size.
In this work, we ask whether better monotonicity testers can be designed by parameterizing the query complexity in terms of the size of the image of the input function. The starting point for our investigation is the folklore result that, for ε-testing monotonicity of Boolean functions over [n], only O(1/ε) queries suffice. A slightly more general corollary of this result is that monotonicity of functions over [n] with image size at most two can be ε-tested with only O(1/ε) queries. The only bound for monotonicity testing (over [n]) that is expressed in terms of the image size r of the input function is the Ω(log r) lower bound for nonadaptive testers due to Blais et al. [13]. We design an ε-tester for monotonicity of functions over [n] with query complexity O((log r)/ε), where r is an upper bound on the size of the image of the input function. This result circumvents Fischer's lower bound of Ω(log n) for this problem by focusing on a different parameter for measuring query complexity.
The size of the image is one of the natural parameters in terms of which one can express the complexity of property testing algorithms. In this work, we show that there are several testing problems for which parameterizing the complexity in terms of the image size works well. Another example where parameterization has helped in the design of efficient testers is the work of Jha and Raskhodnikova [37] on Lipschitz testing, even though they do not view their results from this angle. The complexity of their testers is expressed in terms of the image diameter. The image diameter of a function f : D → R is max_{x,y ∈ D} |f(x) − f(y)|. In many situations, the image diameter is much smaller than the domain size. We believe that all this evidence is compelling enough to make one rethink the way in which the complexity of sublinear-time algorithms is expressed.
Our article is a first step toward formalizing this notion and finding what we think are the right parameters to express the complexity of some central problems in sublinear-time algorithms.

Parameters and Properties Studied in This Work
We study the dependence of the complexity of monotonicity and convexity testers on the image size of the input functions. The image of a function f : D → R is the set Im(f) = {f(x) : x ∈ D}. For the special case when D = [n], a function f : [n] → R can also be viewed as a real-valued array of length n, and Im(f) is equal to the set of distinct values in the array.
We restrict our attention to real-valued functions defined over the following domains. These are domains for which testing monotonicity and convexity have been studied extensively.
The partial order ([n]^d, ⪯), where ⪯ denotes the coordinatewise order, is called a hypergrid; its special case ([2]^d, ⪯) is called a hypercube; and the total order ([n], ≤) is called a line.
Next, we summarize some of the previous work on testing monotonicity and convexity of real-valued functions.
Convexity. For a convex set D, a function f : D → R is convex if f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) for all x, y ∈ D and λ ∈ [0, 1]. For real-valued functions over [n], convexity can be ε-tested using O((log n)/ε) queries [42]. This bound is tight (for constant ε) for nonadaptive testers [13].
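For functions over the line [n], convexity is equivalent to requiring that the consecutive differences be nondecreasing, which is the form in which the condition is typically checked:

\[
f \text{ is convex on } [n]
\;\Longleftrightarrow\;
f(x+1) - f(x) \;\le\; f(x+2) - f(x+1)
\quad \text{for all } x \in [n-2].
\]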

Our Results
In this section, we describe the key technical contributions of our work. We design efficient testers for monotonicity over various hypergrid domains and convexity over the line. For monotonicity of functions over the line [n], which is equivalent to the property of sortedness of arrays of length n, we design efficient testers under two different models of input access: (i) query access and (ii) uniform samples. Our testers are given an upper bound r on the image size of the input function, and their complexity is parameterized in terms of r . In addition to analyzing query (sample) complexity of our algorithms, we also analyze their running times in the model where each oracle query takes unit time.
Sortedness Testing. We present our tester for sortedness of n-element arrays (monotonicity over the line [n]) in Section 3. The complexity of our tester is independent of n. Our tester has 1-sided error; that is, it always accepts a function with the property. (A tester without this guarantee is said to have 2-sided error.) We prove the following theorem.

Theorem 1.3. There is a 1-sided error ε-tester for sortedness of n-element arrays with at most r distinct values with query and time complexity O((log r)/ε).

An important ingredient in our sortedness tester is a nearly optimal nonadaptive tester for this task, presented in Section 2. Its performance is summarized in the next theorem.

Theorem 1.4. There is a nonadaptive, 1-sided error ε-tester for sortedness of n-element arrays with at most r distinct values with query and time complexity O((1/ε) log(r/ε)).

The query complexity of our nonadaptive tester matches (for constant ε) the Ω(log r) lower bound for nonadaptive sortedness testers in Reference [13]. Note that for r ≥ 1/ε, the complexity of the nonadaptive tester is O((log r)/ε). The tester that we design to prove Theorem 1.3 runs the nonadaptive tester for r ≥ 1/ε and a different (adaptive) tester, presented in Section 3, for r < 1/ε.
Uniform Sortedness Testing. The work that defined property testing [31], in addition to the model with oracle access to the input, also considered testers that are allowed access to function values only at points sampled uniformly and independently at random from the domain. This model of property testing, known as uniform or sample-based testing, was further studied by Goldreich and Ron [33], Fischer et al. [28], Berman et al. [9], and Berman et al. [8]. The query complexity of ε-testing sortedness of n-element arrays (for constant ε) using only uniformly and independently drawn samples is Θ(√n) [29]. We design uniform testers that are optimal (up to the dependence on ε) and whose sample complexity is parameterized in terms of the number of distinct elements in the input arrays. These results can be found in Sections 5 and 6.

Monotonicity Testing over Hypergrids. In Section 4, we extend our sortedness tester to monotonicity of functions over hypergrid domains [n]^d and prove Theorem 1.7. Note that our tester has a better complexity (up to log factors) than the optimal tester for monotonicity of real-valued functions over the hypergrid domains, which makes O((d log n)/ε) queries [17], for small r. Parameterizing the complexity of testing in terms of the image size of the functions being tested is what enables us to bypass the Ω((d log n)/ε) lower bound for monotonicity testing of functions over hypergrid domains in Reference [18].
Convexity Testing over the Line. Finally, in Section 7, we give a nonadaptive convexity tester for real-valued functions over the line whose query complexity is parameterized by the image size r (Theorem 1.8). Recall that, for real-valued functions over [n], the complexity of (nonadaptively) ε-testing convexity (for constant ε) is Θ(log n). In contrast, our tester makes only a constant number of queries when the image size of the function is small.

Related Work
A related concept of parameterized testability of graph properties was studied by Iwama and Yoshida [36]. The focus of their work was to design efficient algorithms for the property testing variants of several NP-hard decision problems on graphs, by expressing their complexity in terms of parameters that have been successfully used in the literature on parameterized algorithms. In most cases, the parameters that they use are NP-hard to compute. In contrast, our goal is to determine the right input parameters in terms of which to express the complexity of property testers and, more generally, sublinear-time algorithms. The parameters we use are often easy to compute or estimate and, in many situations, can be assumed to be given to the algorithm. We also believe that the parameters we use are tied to the intrinsic combinatorial structure of the properties and give insight into the complexity of testing them. Another related work is the survey by Newman [40] on property testing in the massively parameterized model. Its main focus is on how, for a fixed property (specifically of graphs), the testing problem changes as the underlying structure of the domain changes. In other words, the parameterization there is with respect to the domain, and the bounds on the query complexity of testers for a property are determined by specific subclasses of graphs (e.g., graphs of a fixed minimum girth). Our work considers parameterization more generally. In particular, most of our results focus on parameterizing based on the range of values that functions take.

THE NONADAPTIVE SORTEDNESS TESTER
In this section, we describe a nonadaptive, 1-sided error ε-tester for sortedness of arrays containing at most r distinct values and prove Theorem 1.4. Our tester (Algorithm 1) uses a proximity oblivious tester (POT) for sortedness as a subroutine.

Definition 2.1 (POT, Goldreich and Ron [32]). A proximity oblivious tester for a property P is an algorithm that has oracle access to a function f and (1) always accepts if f ∈ P; (2) rejects with probability at least dist(f, P) if f ∉ P, where dist(f, P) is the minimum fraction of values of f that need to be changed so that f ∈ P.
Observe that a POT for P can be repeated O(1/ε) times to obtain a 1-sided error ε-tester for P. We note that Definition 2.1 is a special case of the definition of a POT in Reference [32]. Specifically, Goldreich and Ron [32] allow the rejection probability of a POT to be a non-decreasing function of dist(f, P). However, the special case in Definition 2.1 is sufficient for our purposes. We now give an overview of Algorithm 1. It runs for O(1/ε) iterations. In each iteration, it first runs a POT for sortedness on a subarray B of the input array A consisting of 1 + 2r/ε (nearly) equally spaced indices. Next, it picks an index i ∈ [n] uniformly at random and compares A[i] with the values at the indices of B closest to i. Algorithm 1 rejects if either of these steps finds elements out of order.
At least three distinct POTs for sortedness of arrays with O(log n) query complexity are known [11, 17, 26]. We can use any of them in Algorithm 1. Note that Algorithm 1 is not proximity oblivious itself, as it uses the proximity parameter ε to determine its queries. For simplicity, we assume throughout that 2r/ε is an integer that divides n.
Proof of Theorem 1.4. We prove that Algorithm 1 is a nonadaptive, 1-sided error ε-tester making O((1/ε) log(r/ε)) queries to test sortedness of arrays with at most r distinct values. Algorithm 1 is nonadaptive, since its queries can be chosen in advance. It has 1-sided error, as it always accepts sorted arrays. Lemmas 2.2, 2.3, and 2.5 complete the proof of Theorem 1.4.

ALGORITHM 1: The Nonadaptive Sortedness Tester
Input: query access to an array A of size n, an upper bound r on the number of distinct values in A, and a distance parameter ε ∈ (0, 1).
1 Let B be the subarray of A consisting of the indices 1, εn/2r, 2·εn/2r, . . . , (2r/ε − 1)·εn/2r, n. // No need to explicitly construct B.
2 Repeat O(1/ε) times:
3     Run a POT for sortedness on B; reject if it rejects.
4     Query an index i from A uniformly at random.
5     Let j be the largest index of B with j ≤ i and j′ the smallest index of B with j′ ≥ i.
6     Query A[j] and A[j′].
7     Reject if A[j] > A[i] or A[i] > A[j′].
8 Accept.
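The Python sketch below illustrates this structure. It is a simplified rendering rather than an exact implementation: the POT shown is one of the known binary-search-based sortedness POTs, the number of iterations uses an illustrative constant, and the neighbors of the sampled index in B are found by scanning rather than by the constant-time arithmetic assumed in the running-time analysis.

```python
import math
import random

def binary_search_pot(query, length):
    """One round of a binary-search-based POT for sortedness: pick a random
    position, binary-search for its (value, position) pair with ties broken
    by position, and accept iff the search ends at that position. If the
    array is delta-far from sorted, a round rejects with probability >= delta."""
    target = random.randrange(length)
    target_val = query(target)
    lo, hi = 0, length - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid == target:
            return True
        if (query(mid), mid) < (target_val, target):
            lo = mid + 1
        else:
            hi = mid - 1
    return False  # the search missed the chosen position: reject

def nonadaptive_sortedness_tester(A, r, eps):
    """Sketch of Algorithm 1: B consists of ~1 + 2r/eps (nearly) equally
    spaced indices of A and is never materialized; each iteration runs the
    POT on B and spot-checks a uniformly random index of A against its
    nearest neighbors in B."""
    n = len(A)
    step = max(1, int(eps * n / (2 * r)))
    b_indices = list(range(0, n, step))
    if b_indices[-1] != n - 1:
        b_indices.append(n - 1)                       # Step 1: indices of B
    for _ in range(math.ceil(8 / eps)):               # Step 2: O(1/eps) iterations
        if not binary_search_pot(lambda j: A[b_indices[j]], len(b_indices)):
            return False                              # Step 3: POT rejected
        i = random.randrange(n)                       # Step 4: random index of A
        left = max(j for j in b_indices if j <= i)    # Steps 5-6: neighbors in B
        right = min(j for j in b_indices if j >= i)
        if A[left] > A[i] or A[i] > A[right]:         # Step 7: out of order
            return False
    return True                                       # Step 8: accept
```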
Lemma 2.2. Algorithm 1 makes O((1/ε) log(r/ε)) queries.

Proof. Step 3 runs a POT on the subarray B, which has 1 + 2r/ε elements, and hence makes O(log(r/ε)) queries. Steps 4-7 make a constant number of queries. Steps 3-7 are executed O(1/ε) times. Hence, the overall query complexity of the tester is O((1/ε) log(r/ε)).

Recall that an array is ε-far from sorted if at least an ε fraction of its elements needs to be modified to make it sorted; otherwise, it is ε-close to sorted.

Lemma 2.3. Algorithm 1, with probability at least 2/3, rejects every array that has at most r distinct values and is ε-far from sorted.
Proof. Consider an array A that has at most r distinct values and is ε-far from sorted. Let B be the subarray of A as defined in Step 1 of Algorithm 1. If B is ε/7-far from sorted, then, by the definition of a POT for sortedness, Step 3 of our tester rejects with probability at least ε/7 in each iteration. In the rest of the proof, we consider the case when B is ε/7-close to sorted.

Claim 2.4. If B is ε/7-close to sorted, then Steps 4-7 reject with probability at least ε/7 in each iteration.
Proof. The subarray B consists of 1 + 2r/ε (nearly) equally spaced indices, which partition A into 2r/ε intervals of nearly the same size. Let I = {I_1, I_2, . . . , I_{2r/ε}} denote the set of these intervals, where I_k denotes the interval of indices of A between the kth and (k + 1)st indices of B. We call an interval I_k nearly-constant if the values of A at its two endpoints (both of which are indices of B) are equal and, for a nearly-constant interval I_k, we let D(I_k) denote the number of indices in I_k at which the value of A differs from this common value. We now prove Claim 2.4 in two steps. First, we show that Σ_{I_k ∈ I′} D(I_k) > εn/7, where I′ = {I_k ∈ I : I_k is nearly-constant}. Second, we show that Steps 4-7 of Algorithm 1 reject with probability at least Σ_{I_k ∈ I′} D(I_k)/n in each iteration.
Since B is ε/7-close to sorted, there exists a set S of at most ε|B|/7 indices in B whose values can be changed to make B sorted. Note that, for r ≥ 3, we have |S| < r/3 since |B| = 1 + 2r/ε. Let E_1 denote the set of intervals in I adjacent to at least one index from S. As each index in S is adjacent to at most two intervals, |E_1| ≤ 2|S| < 2r/3. Let E_2 denote the set of intervals in I \ E_1 that are not nearly-constant. The total number of distinct values taken by the elements belonging to intervals in E_2 is at least |E_2|. But A has at most r distinct values, and hence, |E_2| ≤ r. Consequently, |E_1 ∪ E_2| < 5r/3. Let D(A) denote the absolute Hamming distance of the array A to the sortedness property, that is, the minimum number of values that need to be changed to make A sorted; since A is ε-far from sorted, D(A) ≥ εn.
Note that all the intervals in I \ (E_1 ∪ E_2) are nearly-constant. Hence, (I \ (E_1 ∪ E_2)) ⊆ I′ and, consequently, Σ_{I_k ∈ I′} D(I_k) > εn/7. This completes the first step of the proof. Consider a nearly-constant interval I_k ∈ I′, and let v denote the common value of A at its two endpoints.

Algorithm 1 rejects in Steps 4-7 if it samples, in Step 4, an index i ∈ I_k with A[i] ≠ v, since A[i] is then out of order with respect to at least one of the two endpoints of I_k. By the definition of D(I_k), there are D(I_k) such indices in I_k. Hence,
Steps 4-7 of Algorithm 1 reject A with probability at least Σ_{I_k ∈ I′} D(I_k)/n. Since Σ_{I_k ∈ I′} D(I_k) > εn/7, the proof of Claim 2.4 is complete.

In both cases, each iteration of Steps 3-7 rejects A with probability at least ε/7. Since these steps are repeated O(1/ε) times, with a suitably large constant in the number of repetitions, Algorithm 1 rejects A with probability at least 2/3. This completes the proof of Lemma 2.3.
Lemma 2.5. The running time of Algorithm 1 is O((1/ε) log(r/ε)).

Proof. Step 1 introduces notation and is not a step of the algorithm. Step 3 runs in O(log(r/ε)) time, and Steps 4-7 run in constant time. Hence, the running time of each iteration of Steps 3-7 is O(log(r/ε)). As these steps are executed O(1/ε) times, the time complexity of Steps 2-7 is O((1/ε) log(r/ε)), which is also the overall time complexity of Algorithm 1.

THE SORTEDNESS TESTER WITH O((log r)/ε) QUERY COMPLEXITY

In this section, we describe a 1-sided error ε-tester for sortedness of arrays containing at most r distinct values and prove Theorem 1.3. The tester, described in Algorithm 2, runs the nonadaptive tester (Algorithm 1) from Section 2 when r ≥ 1/ε, and a different, adaptive procedure otherwise.
We first give a high-level overview of Algorithm 2 in the case r < 1/ε. For this case, as we show later in the analysis, r log(r/ε) = O((log r)/ε), so we aim for query complexity O(r log(r/ε)). We use adaptive queries to (roughly) determine, for each value in the range, the first and the last indices in the array where it appears. This is done in Steps 2-10. If the tester does not find any violations of sortedness in these steps, then violations of sortedness can appear only within the nearly-constant intervals formed by the extreme indices corresponding to each of the at most r values. We check for such violations in Steps 11-13.
Proof of Theorem 1.3. We prove that Algorithm 2 is a 1-sided error ε-tester for sortedness of arrays with at most r distinct values and that its query and time complexity are both O((log r)/ε). We have successor-distance(v*, L_{k_1}) < n and εn/(4r) < successor-distance(v*, L_{k_q}) ≤ εn/(2r). Solving for q, we get q < log(8r/ε). Hence, the tester runs at most log(8r/ε) iterations in which successor-distance(v*, ·) is halved. Accounting for all the iterations for each value in V_w \ {A[n]}, we get w < |V_w| · log(8r/ε) ≤ r log(8r/ε), since |V_w| ≤ r. In each iteration, the tester makes a constant number of queries. So, the overall query complexity of Steps 4-10 is O(r log(r/ε)). The query complexity of Steps 11-13 is O(1/ε).

Hence, the overall query complexity of the tester is O(r log(r/ε) + 1/ε).
For r < 1/ε, we have r log r ≤ (log r)/ε and r log(1/ε) = O((log r)/ε); hence, r log(r/ε) = O((log r)/ε), as the calculation below shows. In addition, 1/ε ≤ (log r)/ε for r ≥ 2. Therefore, the query complexity of Algorithm 2 is O((log r)/ε).
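The bound r log(r/ε) = O((log r)/ε) used here can be verified directly for 2 ≤ r < 1/ε (all logarithms are base 2):

\[
r\log\frac{r}{\varepsilon}
\;=\; 2\,r\log r + \frac{1}{\varepsilon}\,(r\varepsilon)\log\frac{1}{r\varepsilon}
\;\le\; \frac{2\log r}{\varepsilon} + \frac{1}{\varepsilon}
\;\le\; \frac{3\log r}{\varepsilon},
\]

using r log r ≤ (log r)/ε (as r < 1/ε), x log(1/x) ≤ 1 for all x ∈ (0, 1] (applied with x = rε), and 1/ε ≤ (log r)/ε (as log r ≥ 1 for r ≥ 2).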
Lemma 3.2. Steps 2-14 of Algorithm 2, with probability at least 2/3, reject every array that has at most r distinct values and is ε-far from sorted, when r < 1/ε.

Proof. Consider an array A that has at most r distinct values and is ε-far from sorted, where r < 1/ε. Algorithm 2 rejects whenever it finds elements out of order. We show that Steps 11-13 reject with probability at least 2/3 if Steps 2-10 do not find array elements out of order.

We now analyze the running time of Algorithm 2. For r < 1/ε, Steps 2-14 are executed. We maintain the set L as a doubly linked list. The main idea behind implementing Steps 4-10 efficiently is to maintain a pointer p to the smallest index in the list L that satisfies the while condition. The pointer p is initialized to point to the index 1, which is present in L in the first iteration of Step 4. We repeat Steps 4-10 until the index that p points to either no longer satisfies the while condition or is deleted from L. In both cases, we update p to point to the successor of the current index. To implement Steps 11-13 efficiently, we first copy the indices in L into an array D of size |L|. Note that D is a sorted array.
Step 13, which involves finding the successor and the predecessor of a sampled index in L, can then be performed by binary search in D.

THE MONOTONICITY TESTER OVER HYPERGRIDS
In this section, we describe a monotonicity tester for functions over hypergrid domains and prove Theorem 1.7. We prove the correctness of this tester using the correctness of the sortedness tester described in Section 3, a dimension reduction theorem by Chakrabarty et al. [16], and the work investment strategy by Berman et al. [10].
An axis-parallel line ℓ of the hypergrid [n]^d is a set of n points that agree on all but one coordinate. Let f|ℓ denote the restriction of a function f to ℓ. Note that f|ℓ can be thought of as a real-valued function over [n]. The tester iteratively samples uniformly random axis-parallel lines, runs Algorithm 2 on each of them, and rejects if any run of Algorithm 2 rejects. We now analyze the tester and prove Theorem 1.7.
Proof of Theorem 1.7. We prove that Algorithm 3 is a 1-sided error ε-tester for monotonicity of real-valued functions f : [n]^d → R with image size at most r. Let f : [n]^d → R be ε-far from monotone, with |Im(f)| ≤ r. Let L_{n,d} denote the set of all axis-parallel lines in [n]^d and d_M(f) denote the relative distance of f to monotonicity. We also use d_M(f|ℓ) to denote the relative distance to monotonicity of the function f|ℓ. We have |Im(f|ℓ)| ≤ r since |Im(f)| ≤ r. We use the following dimension reduction theorem proved by Chakrabarty et al. [16].

Theorem 4.3 (Chakrabarty et al. [16]). For every function f : [n]^d → R, the expectation of d_M(f|ℓ) over a uniformly random axis-parallel line ℓ ∈ L_{n,d} is Ω(d_M(f)/d).
We note that Theorem 4.3 is a special case of the dimension reduction theorem proved in Reference [16].
We use the work investment strategy due to Berman et al. [10] to extend the monotonicity tester on the line domain to the hypergrid domain.

Theorem 4.4 (Berman et al. [10]). Let X ∈ [0, 1] be a random variable with E[X] ≥ μ, and let δ ∈ (0, 1) be the desired probability of error. Let k_i = 4⌈ln(1/δ)⌉/(2^i μ). Then, with probability at least 1 − δ, at least one of k_i independent samples of X, for some scale i, has value at least 2^{−i}.

Note on a Nonadaptive Tester for Hypergrids. We can get a nonadaptive, 1-sided error ε-tester for monotonicity over hypergrids by using Algorithm 1 instead of Algorithm 2 in Step 4 of Algorithm 3. The same analysis goes through for this case, and the overall query complexity of the tester is O((d/ε) log(d/ε) log(rd/ε)).
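A Python sketch of how these pieces fit together is given below. The subroutine signature line_tester(g, n, r, proximity), the constant in the bound mu obtained from the dimension reduction, and the number of scales are all illustrative assumptions rather than the exact choices made in the analysis.

```python
import math
import random

def hypergrid_monotonicity_tester(f, n, d, r, eps, line_tester):
    """A minimal sketch of Algorithm 3's structure: repeatedly sample a
    uniformly random axis-parallel line, restrict f to it, and run a
    sortedness tester on the restriction. The loop follows the work
    investment strategy (Theorem 4.4); constants are illustrative."""
    delta = 1.0 / 3                      # overall error probability
    mu = eps / (4 * d)                   # assumed lower bound on the average line distance
    num_scales = math.ceil(math.log2(4 / mu))
    for i in range(1, num_scales + 1):
        k_i = math.ceil(4 * math.log(1 / delta) / (2 ** i * mu))
        for _ in range(k_i):
            axis = random.randrange(d)
            base = [random.randrange(n) for _ in range(d)]  # 0-indexed coordinates
            # restriction of f to the sampled axis-parallel line
            def g(t, base=base, axis=axis):
                point = list(base)
                point[axis] = t
                return f(tuple(point))
            # test the restriction with proximity parameter 2^{-i}
            if not line_tester(g, n, r, 2.0 ** (-i)):
                return False             # reject: a far-from-sorted line was found
    return True
```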

THE UNIFORM TESTER FOR SORTEDNESS
In this section, we first describe a nonadaptive ε-tester that makes O(√r/ε) uniform and independent queries to test sortedness of arrays containing at most r distinct values. The expected running time of the tester is O(√r/ε). We then show how to use this tester to obtain another tester that meets the requirements of Theorem 1.5.
Recall that a pair of indices (x, y), where x, y ∈ [n] and x < y, is violated in an array A if A(x) > A(y). Two indices x and y are adjacent in a sample S if there is no index z ∈ S such that x < z < y. Algorithm 4 uses the fact that a sample of indices contains a violated pair if and only if it contains a violated pair of adjacent indices.
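The following Python sketch captures this tester. The constant 8 in the sample size is illustrative, and the sketch sorts the sample (taking O(q log q) time) instead of achieving the O(√r/ε) expected running time established in this section.

```python
import math
import random

def uniform_sortedness_tester(A, r, eps):
    """Sketch of the uniform tester: draw q = O(sqrt(r)/eps) indices
    uniformly and independently, and reject iff the sample contains a
    violated pair, which happens iff some pair of adjacent sampled
    indices is violated."""
    n = len(A)
    q = math.ceil(8 * math.sqrt(r) / eps)     # illustrative constant
    sample = sorted(random.randrange(n) for _ in range(q))
    for x, y in zip(sample, sample[1:]):      # adjacent indices in the sample
        if x < y and A[x] > A[y]:             # violated pair
            return False
    return True
```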
The bound on the query complexity of the tester follows directly from its description. The tester has 1-sided error, as it always accepts sorted arrays. In the rest of the section, we show that the time complexity of the tester is O(√r/ε) and that, with high probability, the tester rejects arrays that are ε-far from sorted. Let G denote the graph on vertex set [n] whose edges are the violated pairs of A. It is shown in Reference [25, Lemma 7] that if A is ε-far from sorted, then G has a matching M of size at least εn/2.
For a pair (x, y) ∈ [n] × [n] such that x < y, we refer to x as its lower endpoint and y as its higher endpoint. Let v_1 < v_2 < · · · < v_r be the values in the range. We first partition the pairs in M into r classes, assigning each pair (x, y) ∈ M with x < y to a class based on the values at its endpoints.

Proof of Theorem 1.5. Let c be a constant such that the expected running time of Algorithm 4 is at most c·√r/ε. The tester described in the statement of Theorem 1.5, say T, can be obtained by running Algorithm 4 for exactly 12c·√r/ε steps and rejecting if and only if Algorithm 4 rejects. The query complexity of T is O(√r/ε). It is easy to see that T accepts if the array is sorted. If the array is ε-far from sorted, the tester T accepts only if either the execution of Algorithm 4 accepts or its running time exceeds 12c·√r/ε. Using Markov's inequality and the union bound, we can see that the probability of T accepting in this case is at most 1/3. This completes the proof of Theorem 1.5.

A LOWER BOUND FOR THE UNIFORM SORTEDNESS TESTER
In this section, we prove that Ω(√r) uniform queries are required to test sortedness of arrays with at most r distinct values, even for testers with 2-sided error, thereby proving Theorem 1.6. The proof uses Yao's principle [48], in the version with two distributions (see, e.g., Raskhodnikova and Smith [44]). We first define two hard distributions P and N on arrays with r distinct values such that every array drawn from P is in sorted order and, with high probability, an array drawn from N is 1/8-far from sorted. We then show that, for any tester that uses o(√r) uniform queries, the statistical distance between the tester's views of the two distributions is small, and hence, with high probability, it cannot distinguish between the distributions.
The statistical distance between two distributions D_1 and D_2, denoted by SD(D_1, D_2), is defined as max_E |Pr_{D_1}[E] − Pr_{D_2}[E]|, where the maximum is over all events E. We write D_1 ≈_δ D_2 to denote SD(D_1, D_2) ≤ δ.
Proof of Theorem 1.6. First, we define two distributions P and N on arrays of size n taking values in the set [r], where n ≥ 16r ln(6r). Without loss of generality, we assume that r is an even number that divides n.
The distribution P is constructed as follows. Partition an n-element array into r/2 blocks, each of length 2n/r. For i ∈ [r/2], set all elements in the ith block to the same value; choose this value to be 2i with probability 1/2 and 2i − 1 with probability 1/2. The distribution N is constructed as follows. As before, partition an n-element array into r/2 blocks, each of length 2n/r. For i ∈ [r/2], the value at each index in the ith block is set to either 2i − 1 or 2i uniformly and independently at random.
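For concreteness, the two distributions can be sampled as follows; this is a direct transcription of the construction above.

```python
import random

def sample_P(n, r):
    """Draw an array from P: each of the r/2 blocks of length 2n/r is
    constant, equal to 2i or 2i - 1 with probability 1/2 each."""
    A = []
    for i in range(1, r // 2 + 1):
        value = 2 * i - random.randint(0, 1)      # 2i or 2i - 1
        A.extend([value] * (2 * n // r))
    return A

def sample_N(n, r):
    """Draw an array from N: within block i, every entry is independently
    2i or 2i - 1 with probability 1/2 each."""
    A = []
    for i in range(1, r // 2 + 1):
        A.extend(2 * i - random.randint(0, 1) for _ in range(2 * n // r))
    return A
```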
Note that every array drawn from P is in sorted order. We will show that, with probability at least 5/6, an array drawn from N is 1/8-far from sorted. To prove this, consider an array A chosen according to N, and consider the ith block of A for some i ∈ [r/2]. Let Y_{2i} denote the number of elements with value 2i in the first half of this block and Y_{2i−1} denote the number of elements with value 2i − 1 in the second half of the block. Each half of the block contains n/r elements, and the value at each index is assigned either 2i − 1 or 2i uniformly and independently at random, so E[Y_{2i}] = E[Y_{2i−1}] = n/(2r). By a Chernoff bound (see the calculation below), each of Y_{2i} and Y_{2i−1} exceeds n/(4r) except with probability at most 1/(6r). If Y_{2i} > n/(4r) and Y_{2i−1} > n/(4r), then at least n/(4r) elements in the ith block need to be changed to make it sorted, as either all the indices with value 2i in the first half or all the indices with value 2i − 1 in the second half need to be changed. By the union bound, with probability at least 5/6, we have Y_{2i} > n/(4r) and Y_{2i−1} > n/(4r) for all i ∈ [r/2]. This implies that, with probability at least 5/6, at least n/(4r) elements need to be changed in each of the r/2 blocks to make them all sorted, that is, at least n/8 elements in total. Hence, with probability at least 5/6, the array A is 1/8-far from sorted.
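The Chernoff step can be made explicit using the standard multiplicative bound Pr[Y ≤ (1 − γ)E[Y]] ≤ exp(−γ²E[Y]/2) for a sum Y of independent indicator random variables, applied with γ = 1/2:

\[
\mathbb{E}[Y_{2i}] = \frac{n}{2r},
\qquad
\Pr\Big[Y_{2i} \le \frac{n}{4r}\Big]
\;\le\; \exp\!\Big(-\frac{n}{16r}\Big)
\;\le\; \exp\big(-\ln(6r)\big)
\;=\; \frac{1}{6r},
\]

where the last inequality uses n ≥ 16r ln(6r); the same bound holds for Y_{2i−1}. A union bound over the r variables Y_1, . . . , Y_r then gives overall failure probability at most 1/6.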
Denote the conditional distribution N|E by Ñ, where E denotes the event that an array chosen according to N is 1/8-far from sorted. Every instance sampled according to Ñ is 1/8-far from sorted. The statistical distance SD(N, Ñ) can be bounded using a lemma of Raskhodnikova and Smith [44] (Lemma 6.2), which bounds the statistical distance between a distribution and its conditioning on a high-probability event. Applying Lemma 6.2 to N and Ñ, we get N ≈_{1/5} Ñ. Consider any 1/8-tester for sortedness that makes q queries, where q ≤ √r/5. Define P-view to be the distribution of values at the q locations queried by the tester in an array sampled according to P. Similarly, define N-view and Ñ-view. Next, we show that it is hard to distinguish P-view from Ñ-view (Lemma 6.3).
Proof. Let F denote the event that at least two out of the tester's q uniform samples from an array A fall in the same block. An upper bound on the probability of this event can be obtained using the following lemma.

Lemma 6.4 (Bellare and Rogaway [4]). Consider q balls and N bins, where each ball is assigned uniformly and independently at random to one of the bins. The probability that there exists a pair of balls assigned to the same bin is at most q(q − 1)/(2N).

Applying Lemma 6.4 with N = r/2 bins and q ≤ √r/5 balls, we get Pr[F] ≤ q²/r ≤ 1/25. Since N ≈_{1/5} Ñ, the definition of statistical distance implies that SD(N-view, Ñ-view) ≤ 1/5. It remains to show that P-view|F̄ = N-view|F̄, where F̄ denotes the complement of F. Let x be an index in the ith block, for some i ∈ [r/2]; under both P and N, the value at x is 2i − 1 or 2i, each with probability 1/2. If F does not occur, then at most one index from each block is sampled by the tester. By the definition of P and N, for any two indices from different blocks, the values assigned to them are independent of each other. Hence, P-view|F̄ = N-view|F̄. Combining these facts, SD(P-view, Ñ-view) ≤ Pr[F] + SD(N-view, Ñ-view) ≤ 1/25 + 1/5 < 1/3. This completes the proof of Lemma 6.3.
By Yao's principle [48], as stated in Reference [44, Claim 5], for q ≤ √r/5, it is hard for any 1/8-tester using q uniform queries to distinguish P from Ñ. Thus, uniform testers for sortedness of arrays with values in [r] require Ω(√r) queries. This completes the proof of Theorem 1.6.

TESTING CONVEXITY
In this section, we describe a nonadaptive tester for convexity of functions f : [n] → R and prove Theorem 1.8. Recall that a function f : [n] → R is convex if its difference sequence is nondecreasing, that is, if f(x + 1) − f(x) ≤ f(x + 2) − f(x + 1) for all x ∈ [n − 2]. Our convexity tester is Algorithm 5. It uses the nonadaptive convexity tester of Parnas et al. [42] as a black box. The query complexity of our tester is O(1/ε) when r < εn/3, as is evident from its description. In the other case, when n ≤ 3r/ε, our tester runs the tester of Reference [42], which makes O((log n)/ε) queries. Substituting the upper bound on n, we get the query complexity bound claimed in Theorem 1.8. The arguments for the bounds on the time complexity are the same as those for the query complexity.
Given a function f : [n] → R and a set S ⊆ [n], let f_S denote the restriction of f to the indices in S whenever S ≠ ∅. To prove the correctness of our tester, we first prove a characterization of convex functions with image size at most r.

CONCLUSIONS AND OPEN QUESTIONS

We conclude with some directions for future work.

- The query complexity of ε-testing monotonicity of real-valued functions over the hypercube {0, 1}^d is Θ(d/ε). For Boolean functions, however, Khot et al. [38] show that monotonicity over {0, 1}^d can be tested with O(√d/ε²) queries. The open question is whether one can parameterize monotonicity testing over the hypercube with respect to the image size r and circumvent the lower bound of Ω(d/ε). Investigating the parameterized complexity of monotonicity testing of real-valued functions over arbitrary partial orders is another interesting direction. The only ε-tester known for monotonicity of real-valued functions over an arbitrary partial order D is a uniform tester that makes O(|D|/ε) queries. It is unclear whether the techniques that we used to parameterize uniform testing of monotonicity over the line [n] will extend directly to more general domains.
- As evidenced by our work, the image size of functions is a parameter that captures the fine-grained complexity of monotonicity testing. From the work of Jha and Raskhodnikova [37], we know that the image diameter is a parameter suited for studying the complexity of Lipschitz testing. Chakrabarty et al. [16] generalize monotonicity and the Lipschitz properties and define a class of properties that they call the Bounded Derivative Properties (BDPs). For each BDP, the query complexity of ε-testing it over the hypergrid [n]^d is Θ((d log n)/ε). Parameterization could help us overcome the lower bounds on testing BDPs just as it did for monotonicity. One natural question is to come up with parameters that work for specific BDPs. A more general question is whether there exists a single parameter that can capture the testing complexity of all of the BDPs.
- Unateness of real-valued functions over the hypergrid is another property whose study is relevant from the perspective of parameterized complexity. A function f : [n]^d → R is unate if it is either nondecreasing or nonincreasing along each dimension. Baleshzar et al. [2] proved that the query complexity of ε-testing unateness of real-valued functions over the hypercube is Θ(d/ε). Chen et al. [22] gave an ε-tester for unateness of Boolean functions over the hypercube with query complexity O(d^{3/4}/ε²). It is open whether parameterization can help fine-tune the query complexity of unateness testing.
- Parameterization can help even in cases where "optimal" testers are known, provided that these testers are not instance-optimal. In the traditional sense, a tester is said to be optimal if there is a family of instances on which its performance is optimal. An instance-optimal tester, on the other hand, has optimal query complexity (up to a fixed constant c) on each instance (see the discussion of instance optimality in References [46, 47]). Parameterization can help in the case of properties for which we currently do not have instance-optimal testers.
- All of our testers rely on an upper bound on the image size of functions being given as part of the input. In classical parameterized algorithms, the parameter is usually not part of the input. Modifying our testers to work with the same guarantees without explicit access to the image size is an open question.