Querying a Matrix through Matrix-Vector Products

We consider algorithms with access to an unknown matrix $M\in\mathbb{F}^{n \times d}$ via matrix-vector products, namely, the algorithm chooses vectors $\mathbf{v}^1, \ldots, \mathbf{v}^q$, and observes $M\mathbf{v}^1,\ldots, M\mathbf{v}^q$. Here the $\mathbf{v}^i$ can be randomized as well as chosen adaptively as a function of $ M\mathbf{v}^1,\ldots,M\mathbf{v}^{i-1}$. Motivated by applications of sketching in distributed computation, linear algebra, and streaming models, as well as connections to areas such as communication complexity and property testing, we initiate the study of the number $q$ of queries needed to solve various fundamental problems. We study problems in three broad categories, including linear algebra, statistics problems, and graph problems. For example, we consider the number of queries required to approximate the rank, trace, maximum eigenvalue, and norms of a matrix $M$; to compute the AND/OR/Parity of each column or row of $M$, to decide whether there are identical columns or rows in $M$ or whether $M$ is symmetric, diagonal, or unitary; or to compute whether a graph defined by $M$ is connected or triangle-free. We also show separations for algorithms that are allowed to obtain matrix-vector products only by querying vectors on the right, versus algorithms that can query vectors on both the left and the right. We also show separations depending on the underlying field the matrix-vector product occurs in. For graph problems, we show separations depending on the form of the matrix (bipartite adjacency versus signed edge-vertex incidence matrix) to represent the graph. Surprisingly, this fundamental model does not appear to have been studied on its own, and we believe a thorough investigation of problems in this model would be beneficial to a number of different application areas.


Introduction
Suppose there is an unknown matrix M ∈ F^{n×d} that you can only access via a sequence of matrix-vector products Mv^1, ..., Mv^q, where we call v^1, ..., v^q the query vectors; these can be chosen in a randomized, possibly adaptive way. By adaptive, we mean that v^i can depend on v^1, ..., v^{i−1} as well as Mv^1, ..., Mv^{i−1}. Here F is a field, and we study different fields for different applications. Suppose our goal is to determine whether M satisfies a specific property P, such as having approximately full rank or having two identical columns. A natural question is the following:

Question 1: How many queries q are necessary to determine if M has property P?
A number of well-studied problems are special cases of this question, e.g., compressed sensing or sparse recovery, for which M ∈ R^{1×d} is an approximately k-sparse vector, and one would like a number q of queries close to k. It is known that if the query sequence is non-adaptive, meaning that v^1, ..., v^q are chosen before making any queries, then q = Θ(k log(n/k)) is necessary and sufficient [13,7] to recover an approximately k-sparse vector. However, if the queries can be adaptive, then q = O(k log log n) queries suffice [17], while there is a lower bound of Ω(k + log log n) [31] (see also the recent work [30,18]).
The above problem is representative of an emerging field called linear sketching, which is the underlying technique behind a number of algorithmic advances of the past two decades. In this model one queries M · v^1, ..., M · v^r for non-adaptive queries v^1, ..., v^r. For brevity we write this as M · V, where V ∈ F^{d×r} has i-th column equal to v^i. Linear sketching has played a central role in the development of streaming algorithms [3]. Perhaps more surprisingly, linear sketches are also known to achieve the minimal space necessary for any, possibly non-linear, algorithm for processing dynamic data streams under certain general conditions [25,2,20], a result that is essential for proving a number of lower bounds for approximating matchings in a stream [23,5]. Linear sketching has also led to the fastest known algorithms for problems in numerical linear algebra, such as least squares regression and low rank approximation; for a survey see [37]. Note that given M · V and M′ · V, by linearity one can compute (M + M′) · V = M · V + M′ · V. This basic versatility allows for fast updates in a data stream and mergeability in environments such as MapReduce and other distributed models of computation.
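The mergeability property can be checked directly. The following NumPy sketch (the dimensions and the use of Gaussian sketching vectors are our own illustrative choices, not taken from the text) verifies that sketches of M and M′ computed independently combine into the sketch of M + M′:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 50, 40, 10

# Two matrices (e.g., held by different servers) and a shared sketching matrix V.
M1 = rng.standard_normal((n, d))
M2 = rng.standard_normal((n, d))
V = rng.standard_normal((d, r))  # columns are the non-adaptive queries v^1, ..., v^r

# By linearity, sketches computed independently can be merged:
# (M1 + M2) V = M1 V + M2 V.
merged = M1 @ V + M2 @ V
direct = (M1 + M2) @ V
assert np.allclose(merged, direct)
```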
Given the applications above, we consider Question 1 an important question to understand for many different properties P of interest, which we describe in more detail below. A central goal of this work is to answer Question 1 for such properties and to propose this be a natural model of study in its own right.
One notable difference between our model and a number of applications of linear sketching is that we allow for adaptive query sequences. In fact, our upper bounds will be non-adaptive, while our nearly matching lower bounds for each problem we consider will hold even for adaptive query sequences. Our model is also related to property testing, where one tries to infer properties of a large unknown object by (possibly adaptively) sampling a sublinear number of locations of that object. We argue that linear queries are a natural extension of sampling locations of an object, and that this is a natural "sampling model" not only because of the desired properties in the distributed, linear algebra, and streaming applications above, but sometimes also because of physical constraints, e.g., in compressed sensing, where optical devices naturally capture linear measurements.
From a theoretical standpoint, any property testing algorithm, i.e., one that samples q entries of M, can be implemented in our model with q linear queries. However, our model gives the algorithm much more flexibility. From a lower bound perspective, as in the case of property testing [11], some of our lower bounds will be derived from communication complexity. However, not all of our bounds can be proved this way. For example, one notable result we show is an optimal lower bound on the number of queries needed to approximate the rank of M ∈ R^{n×n} up to a factor t by randomized, possibly adaptive algorithms; we show that n/t + 1 queries are necessary and sufficient. A natural alternative way to prove this would be to give part of the matrix to Alice and part to Bob, and have the players exchange the M_L v_i and M_R v_i, where M = M_L + M_R, M_L is Alice's part, and M_R is Bob's part. Then, if the 2-player randomized communication complexity of approximating the rank of M up to a factor of t were known to be Ω(n²/t), we would obtain a nearly matching query lower bound of Ω(n/(t(b + log n))), where b is the number of bits needed to specify the entries of M and the queries. However, the 2-player communication complexity of approximating the rank of M up to a factor t over R is unknown! We are not aware of any lower bound better than Ω(1) for constant t for this problem for adaptive queries. We note that for non-adaptive queries, there is an Ω(n²) sketching lower bound over the reals given in [24], and an Ω(n²/log p) lower bound for finite fields (of size p) in [4]. There is also a property testing lower bound in [8], though it makes additional assumptions on the input. Thus, our model gives a new lens through which to study this problem, and through it we are able to derive strong lower bounds for adaptive queries. Our techniques could also be helpful for proving lower bounds in existing models, such as two-party communication complexity.
Our model is also related to linear decision tree complexity, see, e.g., [10,19], though such lower bounds typically involve just seeing a threshold applied to Mv i , and typically M is a vector. In our case, we observe the entire output vector Mv i .
An interesting twist in our model is that in the formulation above, we are only allowed to query M via matrix-vector products on the right, i.e., of the form M · v^i. One could ask if there are natural properties P of M for which the number q_L of queries needed when querying M via products of the form (u^1)^T M, (u^2)^T M, ..., (u^{q_L})^T M can be significantly smaller than the number q_R of queries needed when querying M via products of the form Mu^1, Mu^2, ..., Mu^{q_R}.

Question 2: Are there natural problems for which q_L ≪ q_R?
We show that this is in fact the case, namely, if we can only multiply on the right, then it takes Ω(n/ log n) queries to determine if there is a column of a matrix M ∈ {0, 1} n×n which is all 1s. However, if we can multiply on the left, then the single query (1, 1, . . . , 1) can determine this.
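The left-query algorithm for an all-ones column can be sketched concretely. In the NumPy illustration below (the matrix size and planted column are hypothetical choices of ours), a single all-ones left query sums every column at once:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 12, 8
M = rng.integers(0, 2, size=(m, n))
M[:, 3] = 1  # plant an all-ones column

# A single LEFT query u^T M with u = (1, ..., 1) sums each column;
# column j is all ones iff the j-th entry of the response equals m.
u = np.ones(m)
response = u @ M
print(np.flatnonzero(response == m))  # contains the planted column 3
```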
We study a few problems around Question 2, which is motivated from several perspectives. First, matrices might be stored on computers in a specific encoding, e.g., a sparse row format, from which it may be much easier to multiply on the right than on the left. Also, in compressed sensing, it may be natural for physical reasons to obtain linear combinations of columns rather than rows.
Another important question is how the query complexity depends on the underlying field in which the matrix-vector products are performed. Might it be that, for a natural problem, the query complexity when the matrix-vector products are performed modulo 2 is much higher than when they are performed over the reals?

Question 3: Is there a natural problem for which the query complexity in our model over F_2 is much larger than that over the reals?
Yet another important application of this model is to querying graphs. A natural question is which representation to use for the graph. For example, a natural representation of a graph on n vertices is its adjacency matrix A ∈ {0, 1}^{n×n}, where A_{i,j} = 1 if and only if {i, j} occurs as an edge. A natural representation for a bipartite graph with n vertices in each part is an n × n matrix A where A_{i,j} = 1 iff there is an edge from the i-th left vertex to the j-th right vertex. Yet another representation is the n² × n edge-vertex incidence matrix, where the {i, j}-th row is either 0, or has exactly two ones, one in location i and one in location j. One often considers a signed edge-vertex incidence matrix, where one first arbitrarily fixes an ordering on the vertices, and then the {i, j}-th row has a 1 in position i and a −1 in position j if i > j, and otherwise the positions are swapped. Yet another possible representation of a graph is its Laplacian.
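These representations are closely linked algebraically; for instance, if B is the signed edge-vertex incidence matrix, then B^T B is the graph Laplacian. A small NumPy sketch (with an arbitrary example graph of our own choosing) verifying this identity:

```python
import numpy as np

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]

# Signed edge-vertex incidence matrix B: one row per edge {i, j},
# with +1 at one endpoint and -1 at the other (the sign convention
# comes from a fixed vertex ordering and does not affect B^T B).
B = np.zeros((len(edges), n))
for r, (i, j) in enumerate(edges):
    B[r, i], B[r, j] = 1.0, -1.0

# Adjacency matrix A and Laplacian L = D - A.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# The representations are algebraically linked: B^T B equals the Laplacian.
assert np.allclose(B.T @ B, L)
```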
Question 4: Do some natural representations of graphs admit much more efficient query algorithms for certain problems than other natural representations?
We note that in the data stream model, where one sees a long sequence of insertions and deletions to the edges of a graph, each of the matrix representations above can be simulated, and so they all lead to the same complexity. We will show, perhaps surprisingly, that in our model there can be an exponential difference in the query complexity between two different natural representations of a graph for the same problem. We next get into the details of our results. We would like to stress that it is not immediately obvious how to tackle even basic problems in this model. As a puzzle for the reader: what is the query complexity of determining whether a matrix M ∈ F^{n×n} is symmetric if one can only query vectors on the right? We answer this later in the paper.

Formal Model and Our Results
We now describe our model and results formally in terms of an oracle. The oracle has a matrix M ∈ F^{m×n}, for some underlying field F that we specify in each application. We can only query this matrix via matrix-vector products: we pick an arbitrary vector x and send it to the oracle, and the oracle responds with the vector y = M · x. We focus our attention on the case when the queries occur only on the right. Our goal is to approximate or test a number of properties of M with a minimal number of queries, i.e., to answer Question 1 for a large number of different application areas.
We study a number of problems, as summarized in Table 1. We assume M is an m × n matrix and ε > 0 is a parameter of the problem. The bounds hold for constant-probability algorithms. In some problems, such as testing whether the matrix is diagonal, we always assume m = n, and in the graph testing problems we explicitly describe how the graph is represented using M. Interestingly, we are able to prove very strong lower bounds for approximating the rank, which, as described above, are unknown to hold for randomized communication complexity.
Motivated by streaming and statistics questions, we next study the query complexity of approximating the norm of each row of M. We also study the computation of the majority or parity of each column or row of M; the AND/OR of each column or row of M, or equivalently, whether M has an all-ones column or row; whether M has two identical columns or rows; and whether M contains a row of unusually large norm, i.e., a "heavy hitter". Here we show there are natural problems, such as computing the parity of all columns, which can be solved with 1 query if sketching on the left, but require Ω(n) queries if sketching on the right, thus answering Question 2. We also answer Question 3, observing that for the natural problem of testing whether a row is all ones, a single deterministic query suffices over the reals, whereas over F_2 a deterministic algorithm needs n queries.

For graph problems, we first argue that if the graph is presented as an n × n bipartite adjacency matrix M, then it requires Ω(n/log n) possibly adaptive queries to determine if the graph is connected. In contrast, if the graph is presented as an n × n² signed vertex-edge incidence matrix, then polylog(n) non-adaptive queries suffice. This answers Question 4, showing that the type of representation of the graph is critical in this model. Motivated by a large body of recent work on triangle counting (see, e.g., [14] and the references therein), we also give strong negative results for this problem in our model, which, as with all of our lower bounds unless explicitly stated otherwise, hold even for algorithms that perform adaptive queries.

Preliminaries
We use capital bold letters, e.g., A, B, M, to denote matrices, and lowercase bold letters, e.g., x, y, to denote column vectors. Sometimes we write a matrix as a list of column vectors in square brackets, e.g., M = [m_1, ..., m_n]. We use calligraphic letters, e.g., D, to denote probability distributions, and write M ← D to denote that M is sampled from distribution D. In particular, we use G to denote a Gaussian distribution, and G for a matrix whose entries are sampled independently and identically distributed (abbreviated i.i.d. in the following) from a Gaussian distribution.
We call a matrix M i.i.d. Gaussian if each of its entries is drawn i.i.d. from a Gaussian distribution. It is easy to check that if G is a p × n i.i.d. Gaussian matrix and R is an n × n rotation matrix, then G × R is still i.i.d. Gaussian and has the same probability distribution as G.
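This rotational invariance can be checked empirically. The NumPy sketch below (sample size and tolerance are our own choices) estimates the row covariance of G × R and compares it to the identity, as predicted:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# A random rotation R, obtained from the QR decomposition of a Gaussian matrix.
R, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Rows of G are N(0, I); rotational invariance says rows of G @ R are too.
# We check the empirical row covariance of G @ R against the identity.
G = rng.standard_normal((200000, n))
GR = G @ R
cov = GR.T @ GR / G.shape[0]
assert np.allclose(cov, np.eye(n), atol=0.02)
```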
The total variation distance, sometimes called the statistical distance, between two probability measures P and Q is defined as D_TV(P, Q) = sup_A |P(A) − Q(A)|, where the supremum is over measurable sets A.

Let X be an n × m matrix with each row drawn i.i.d. from an m-variate normal distribution N(0, Σ). Then the distribution of the m × m random matrix A = X^T X is called the Wishart distribution with n degrees of freedom and covariance matrix Σ, denoted W_m(n, Σ). The distribution of the eigenvalues of A is characterized in the following lemma.

Linear Algebra Problems
In this section we present our lower bound for rank approximation in Section 3.1. In the following, we provide our results about trace estimation in Section 3.2, testing symmetric matrices in Section 3.3, testing diagonal matrices in Section 3.4, testing unitary matrices in Section 3.5, and approximating the maximum eigenvalue in Section 3.6.

Lower Bound for Rank Approximation
In this section, we discuss how to approximate the rank of a given matrix M over the reals when the queries consist of right multiplication by vectors. A naive algorithm to learn the rank is to pick random Gaussian query vectors non-adaptively. To approximate the rank, that is, to distinguish whether rank(M) ≤ p or rank(M) ≥ p + 1, this algorithm uses p + 1 queries, and it is not hard to see that it succeeds with probability 1. Indeed, if H ∈ R^{n×(p+1)} is the random Gaussian query matrix and M the unknown n × n matrix, then we can write M in its thin singular value decomposition as M = UΣV^T, where U, V ∈ R^{n×k} have orthonormal columns, Σ ∈ R^{k×k} has positive diagonal entries, and k = rank(M). We have that rank(M · H) = rank(V^T H), which by rotational invariance of the Gaussian distribution is the same as the rank of a random Gaussian matrix, which will be the minimum of p + 1 and the rank of M with probability 1.
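The naive algorithm can be sketched as follows (a minimal NumPy illustration; the helper name `rank_probe` and all dimensions are ours, and we use `numpy.linalg.matrix_rank` to compute the rank of the sketch numerically):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 7
# A random n x n matrix of rank k.
M = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

def rank_probe(M, q, rng):
    """Query M with q random Gaussian vectors and return rank(M H)."""
    H = rng.standard_normal((M.shape[1], q))
    return np.linalg.matrix_rank(M @ H)

# With probability 1, rank(M H) = min(q, rank(M)):
print(rank_probe(M, 5, rng))   # 5: too few queries to certify rank >= 7
print(rank_probe(M, 10, rng))  # 7: the true rank, once q exceeds it
```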
In the following, we show that we cannot expect anything better. We first show that for non-adaptive queries, at least p + 1 queries are necessary to learn the approximate rank; we then generalize our results to adaptive queries. Our results hold for randomized algorithms by applying Yao's minimax principle.

Non-Adaptive Query Protocols
Theorem 1. Let constant ε > 0 be the error tolerance, let M be an n × n oracle matrix, and suppose for now that we make non-adaptive queries. For integers p < p′ ≤ n, at least p + 1 queries are necessary to distinguish rank(M) ≤ p from rank(M) ≥ p′ with advantage ≥ ε.
Proof. Given any algorithm distinguishing rank(M) ≤ p from rank(M) ≥ p′ for some p′ < n, we can determine whether a p′ × p′ matrix M′ has full rank p′ or rank(M′) ≤ p by padding M′ to an n × n matrix M. Therefore, in what follows it suffices to prove the lower bound for two n × n matrices M_1 and M_2 where rank(M_1) ≤ p and rank(M_2) = n:

M_1 = U × G^T,    M_2 = U × G^T + (1/Z(n)) · U_⊥ × H^T.

Here U has p columns and U_⊥ has (n − p) columns such that [U, U_⊥] forms an n × n random orthonormal basis, G^T and H^T are p × n and (n − p) × n matrices whose entries are sampled i.i.d. from the standard Gaussian distribution, and Z(n) is a function of n which will be specified later. It immediately follows that rank(M_1) ≤ p, and rank(M_2) = n with overwhelmingly high probability. Then we assume rank(M_2) = n and discuss the query lower bound for distinguishing M_1 from M_2. Given M ∈ {M_1, M_2}, without loss of generality we denote the q non-adaptive queries by an n × q matrix V with orthonormal columns. Then, it suffices to show that the two distributions M_1 V and M_2 V are hard to distinguish. Note that [U, U_⊥] is orthonormal, and hence U^T U = I_p, U_⊥^T U_⊥ = I_{n−p}, and U^T U_⊥ = 0_{p×(n−p)}. We introduce Lemma 2 to eliminate U and U_⊥ in the representation of M × V.

Lemma 2. For M_1, M_2 and V defined as above, D_TV(M_1 V, M_2 V) = D_TV(V^T M_1^T M_1 V, V^T M_2^T M_2 V).

The direction D_TV(V^T M_1^T M_1 V, V^T M_2^T M_2 V) ≤ D_TV(M_1 V, M_2 V) is trivial by the data processing inequality (i.e., for every X, Y and function f, D_TV(f(X), f(Y)) ≤ D_TV(X, Y)). In what follows we only prove the other direction.
First we notice that for every fixed n × n orthonormal matrix R and for a random matrix M sampled as M_1 or M_2, the product N := RM follows exactly the same distribution as M. Thus NV and MV are identically distributed.
Then, from a random sample V^T M^T M V we can find a matrix M′ such that V^T M^T M V = (M′)^T M′ and M′ = SMV for some orthonormal matrix S (recall that the query matrix V is orthonormal). Although M′ is not necessarily the same as MV because of S, we have RM′ ∼ NV ∼ MV for a uniformly random orthonormal matrix R. Thus we can transform a random sample from V^T M^T M V into a sample from MV via RM′, and hence D_TV(M_1 V, M_2 V) ≤ D_TV(V^T M_1^T M_1 V, V^T M_2^T M_2 V), which proves Lemma 2.

Now write V^T M_1^T M_1 V = A^T Λ A and V^T M_2^T M_2 V = B^T Λ′ B for orthonormal matrices A and B, where Λ and Λ′ are the diagonal matrices of eigenvalues. Using Lemma 2, it suffices to prove an upper bound on D_TV(Λ, Λ′), since D_TV(V^T M_1^T M_1 V, V^T M_2^T M_2 V) ≤ D_TV(Λ, Λ′): any algorithm separating the two Gram matrices implies a separation of Λ from Λ′ with the same advantage, by multiplying by random orthonormal matrices.
Let W = G^T V and W′ = H^T V, so that V^T M_1^T M_1 V = W^T W and V^T M_2^T M_2 V = W^T W + (1/Z²(n)) W′^T W′. By Weyl's inequality [36,39], |λ′_i − λ_i| ≤ (1/Z²(n)) ‖W′‖_2² ≤ (1/Z²(n)) ‖W′‖_F² for every i, where λ_1 ≥ ... ≥ λ_q and λ′_1 ≥ ... ≥ λ′_q are the eigenvalues forming Λ and Λ′. Notice that W′ is an (n − p) × q i.i.d. Gaussian matrix, and hence ‖W′‖_F² is a chi-squared variable with (n − p)q degrees of freedom, which is bounded by O((n − p)q) with high probability (cf. Example 2.12 in [35]). Recalling that q ≤ p, in what follows we condition on the event that 0 ≤ λ′_i − λ_i ≤ O((n − p)q/Z²(n)) for every i. We then show the gaps between the eigenvalues λ_i are sufficiently large. Note that since G^T is i.i.d. Gaussian and V is an orthonormal matrix, each row of W = G^T V is independently drawn from a q-variate normal distribution; thus W^T W follows the Wishart distribution W_q(p, I_q). Let q = p and let λ_1, ..., λ_p be sorted in descending order. Then by Lemma 1 the density function of Λ is

f(λ_1, ..., λ_p) = c_p · exp(−Σ_i λ_i/2) · Π_i λ_i^{−1/2} · Π_{i<j} (λ_i − λ_j),   (1)

for a normalizing constant c_p. Let E denote the event that λ_p ≥ 0.01/√n and λ_i − λ_j ≥ γ for all 1 ≤ i < j ≤ p.

Lemma 3. For W^T W defined as above and sufficiently small γ = 2^{−Θ(p² log n)}, Pr[E] > 0.9.
To lower bound Pr[E], let E_0 denote the event that λ_p ≥ 0.01/√n, and for 1 ≤ i ≤ p − 1 let E_i denote the bad event that the gap λ_i − λ_{i+1} is smaller than γ; we need to upper bound the probability of each E_i.

Let f be the density function of Λ as in (1), and let Leb(·) denote the Lebesgue measure. For every i, Pr[E_i] is the integral of f over the set {λ : λ_i − λ_{i+1} < γ}, whose Lebesgue measure (within the relevant bounded region) shrinks linearly in γ. Note that conditioned on E_0, so that λ_p ≥ 0.01/√n, the density function f is bounded from above. As a result, each Pr[E_i] tends to 0 as γ does, and therefore the probability of E is lower bounded as claimed, for sufficiently small γ = 2^{−Θ(p² log n)}. Conditioned on the event E, and recalling that λ′_i − λ_i ≤ O((n − p)p/Z²(n)) for every i, the probability density of Λ′ has only a negligible difference from that of Λ, since the small perturbation of the eigenvalues is dominated by the corresponding terms in f(Λ).
Similarly we can prove f(Λ′)/f(Λ) ≥ 1 − O(np³γ^{−1}/Z²(n)). Thus the total variation distance between Λ and Λ′ conditioned on E is D_TV(Λ, Λ′ | E) ≤ O(np³γ^{−1}/Z²(n)) = O(1/n²) for sufficiently large Z(n) ≥ (np)^{1.5} γ^{−0.5} = 2^{Θ(p² log n)}. Thus, for sufficiently large n, D_TV(Λ, Λ′) ≤ Pr[¬E] + D_TV(Λ, Λ′ | E) ≤ 0.11. Therefore, with as many as q = p non-adaptive queries to the oracle matrix M, the two distributions of M_1 and M_2 cannot be distinguished with advantage greater than 0.11, and at least p + 1 queries are necessary to distinguish the two matrices M_1 and M_2 of rank ≤ p and rank n, respectively. Indeed, the above argument holds for every constant advantage ε, by making the failure probability in Lemma 3 sufficiently small (taking γ sufficiently small) and letting Z(n) be sufficiently large.

Equivalence Between Adaptive and Non-Adaptive Protocols
Now, we consider the adaptive query matrix V = [v_1, ..., v_q], where v_i is the i-th query vector. Without loss of generality, we can assume that every v_i is a unit vector orthogonal to the previous query vectors v_1, ..., v_{i−1}. This gives us the following formal definition of an adaptive query protocol.

Definition 1. For a target matrix M, an adaptive query protocol P outputs a sequence of query vectors v_1, v_2, .... It is called a normalized adaptive protocol if for every i, the query vector v_i output by P satisfies ‖v_i‖_2 = 1 and v_i ⊥ v_j for all j < i.

Let P_std be the standard protocol which outputs e_1, e_2, ..., where e_i is the i-th standard basis vector. We then show that adaptivity is unnecessary by proving that P_std has the same power as any normalized adaptive protocol to distinguish the matrices M_1 and M_2 defined in the previous section.
More formally, we show the following lemma for the matrix M_2.

Lemma 4. Fix any n × n orthonormal matrix [U, U_⊥] and any normalized adaptive protocol P. Consider M_2 = U × G^T + (1/poly(n)) · U_⊥ × H^T, where G^T is a p × n i.i.d. Gaussian matrix and H^T is an (n − p) × n i.i.d. Gaussian matrix. Let V = [v_1, ..., v_q] and V_std = [e_1, ..., e_q] be the query matrices output by protocols P and P_std, respectively. Then M_2 V has the same distribution as M_2 V_std.
Proof. Since the matrices G^T · V_std and H^T · V_std are i.i.d. Gaussian, that is, every element of the two matrices is drawn from the standard Gaussian distribution independently of the others, it is enough to show that both G^T · V and H^T · V are i.i.d. Gaussian. In the following, we show that G^T · V is i.i.d. Gaussian and independent of H^T · V; the same argument also holds for H^T · V. Note that v_1, ..., v_q are unit vectors orthogonal to each other. We first define orthogonal rotation matrices R_1, R_2, ... recursively as follows. The matrix R_1 takes v_1 to e_1. The matrix R_i takes e_j to e_j for every j < i, and takes R_{i−1} ··· R_1 v_i to e_i. Note that R_i depends only on the first i query vectors. We have R_i ··· R_1 V_i = V_std,i for every i ≤ q, where V_i denotes the first i columns of V, and G^T V = G^T · R_1^{−1} ··· R_q^{−1} · V_std. Define the matrix GR_i := G^T · R_1^{−1} ··· R_i^{−1}. In the following, we use induction to show that for every i ≤ q, GR_i is i.i.d. Gaussian and independent of H^T · V. It is enough to show that for any fixed H and every i ≤ q, GR_i is i.i.d. Gaussian.
For i = 1: since R_1 is determined by v_1, which is independent of G^T, and R_1 is an orthogonal matrix, GR_1 = G^T R_1^{−1} is i.i.d. Gaussian. For the inductive step, suppose GR_i is i.i.d. Gaussian; we show GR_{i+1} = GR_i · R_{i+1}^{−1} is also i.i.d. Gaussian. On one hand, R_{i+1} is determined by v_1, ..., v_{i+1}, which are determined by the responses to the first i queries, that is, by M_2 V_i. On the other hand, R_{i+1} e_j = e_j for every j ≤ i, and thus R_{i+1}^{−1} is block-diagonal: it acts as the identity on the first i coordinates and as an orthogonal matrix R′ on the remaining n − i coordinates. Note the matrix R′ is determined by the protocol, U, U_⊥, H, and the first i columns of GR_i, but it is independent of the last n − i columns of GR_i. Consequently, in the product GR_i × R_{i+1}^{−1}, the first i columns are the same as those of GR_i. For each of the other n − i columns, the a-th element of the j-th column is Σ_{b≥i+1} gr_{ab} r′_{b,j}, where gr_{ab} and r′_{b,j} are the elements of GR_i and R′, respectively. Since r′_{b,j} is independent of the last n − i columns of GR_i, it is independent of gr_{ab} when b ≥ i + 1. Since GR_i is i.i.d. Gaussian and R′ is an orthogonal matrix, the last n − i columns of GR_{i+1} are also i.i.d. Gaussian and independent of the first i columns. Therefore GR_q is i.i.d. Gaussian and independent of H^T V. This finishes the proof.
Obviously, the same argument also holds for M_1. Combining these results with Theorem 1, together with Yao's minimax principle [38], we obtain the following theorem.

Theorem 2. Let constant ε > 0 be the error tolerance and let M be an n × n oracle matrix accessed with adaptive queries. For integers p < p′ ≤ n, at least p + 1 queries are necessary for any randomized algorithm to distinguish whether rank(M) ≤ p or rank(M) ≥ p′ with advantage ≥ ε.

Lower Bound for Trace Estimation
In this section, we lower bound the number of queries needed to approximate the trace tr(M) of a matrix M. In particular, we reduce this problem to triangle detection, as will be proved in Theorem 8. For the trace estimation problem, Avron and Toledo [6] analyzed the convergence of randomized trace estimators in a similar matrix-vector product framework; in their model, an unknown matrix M is accessed via bilinear queries v^T Mv, while in our model we only consider right multiplications of the form Mv.

Theorem 3. Any possibly adaptive algorithm that approximates the trace of a symmetric matrix M ∈ R^{n×n} up to any relative error requires Ω(n/log n) queries.

Proof. Suppose we had a possibly adaptive query algorithm making q(n) queries which, for a symmetric matrix M, could approximate tr(M) up to any relative error. If M = A³ for a symmetric matrix A, we can run the trace estimation algorithm on M as follows: if x_1 is the first query, we compute Ax_1, then A(Ax_1), then A(A(Ax_1)) = A³x_1. This determines the second query x_2, and we similarly compute Ax_2, then A(Ax_2), then A(A(Ax_2)) = A³x_2, etc. Thus, given only query access to A, we can simulate the algorithm on M = A³ with 3q(n) adaptive queries. Now, it is well known that for an undirected graph G with adjacency matrix A, the quantity tr(A³)/6 is the number of triangles in G. By the argument above, it follows that with 3q(n) queries to A, we can determine whether G has a triangle. On the other hand, by Theorem 8 below, at least Ω(n/log n) queries to A are necessary for any adaptive algorithm to decide if there is a triangle in G. Therefore 3q(n) = Ω(n/log n), and hence q(n) = Ω(n/log n), which completes the proof.
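The simulation in the reduction can be illustrated concretely: each query to M = A³ is answered by three queries to A, and probing with standard basis vectors recovers tr(A³) exactly. A small NumPy sketch (the example graph and the naive n-query trace computation are ours, used only to show the simulation):

```python
import numpy as np

# Adjacency matrix of a 4-vertex graph with exactly one triangle {0, 1, 2}.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

def cube_query(A, x):
    """Simulate one matrix-vector query to M = A^3 via three queries to A."""
    return A @ (A @ (A @ x))

# tr(A^3) = sum_i e_i^T A^3 e_i, so n simulated queries (3n queries to A)
# recover the trace, and tr(A^3)/6 is the triangle count.
n = A.shape[0]
trace = sum(cube_query(A, np.eye(n)[:, i])[i] for i in range(n))
print(trace / 6)  # 1.0: the single triangle {0, 1, 2}
```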

Deciding if M is a Symmetric Matrix
Theorem 4. Given an n × n matrix M over any finite field or over the fields R or C, O(log(1/ε)) queries are enough to test whether M is symmetric with probability 1 − ε.
Proof. We choose two random vectors u and v: over a finite field each entry is uniform, and over R or C each entry is Gaussian. We then compute Mu and Mv, and declare M to be symmetric if and only if u^T · Mv = v^T · Mu. It is easy to check that if M is symmetric, the test succeeds. We then show that if M is not symmetric, u^T Mv ≠ v^T Mu with constant probability, so we obtain success probability 1 − ε by repeating the test O(log(1/ε)) times. Let A = M − M^T. When M is not symmetric, A ≠ 0. Since v^T Mu = u^T M^T v, we have u^T Mv ≠ v^T Mu if and only if u^T Av ≠ 0. We can treat u^T Av = Σ_{i,j} u_i A_{i,j} v_j as a degree-2 polynomial in the entries of u and v. Since A ≠ 0, this is a non-zero polynomial, and it has at most constant probability of evaluating to 0 over any underlying field. To see this, for each i let t_i = Σ_j A_{i,j} v_j. At least one t_i is non-zero with probability at least 1/2 over the choice of v, for any underlying field. Now u^T Av = Σ_i u_i t_i; fix all coordinates of u except u_i, for an index i with t_i ≠ 0, so that u^T Av = S + u_i t_i for some fixed value S. If u_i can take at least two possible values, then S + u_i t_i = 0 for at most one of them. So we obtain a detection probability of at least 1/4 overall.
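The test in this proof can be sketched as follows over the reals (a NumPy illustration; the function name, trial count, and floating-point tolerance are our choices):

```python
import numpy as np

def looks_symmetric(M, trials, rng):
    """One-sided test: check u^T (M v) == v^T (M u) for random Gaussian u, v."""
    n = M.shape[0]
    for _ in range(trials):
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        # Each trial costs two right-queries, Mu and Mv.
        if not np.isclose(u @ (M @ v), v @ (M @ u)):
            return False
    return True

rng = np.random.default_rng(4)
S = np.arange(16, dtype=float).reshape(4, 4)
print(looks_symmetric(S + S.T, 10, rng))  # True: symmetric matrices always pass
print(looks_symmetric(S, 10, rng))        # False w.h.p.: asymmetry is caught
```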

Deciding if M is a Diagonal Matrix
Given an n × n matrix M, we show that O(log(1/ε)) queries are sufficient to test whether M is a diagonal matrix with error at most ε.

The first query is the all-ones vector, which retrieves the sum of each row. Then we make O(log(1/ε)) random queries in which each entry is uniformly sampled from {0, 1}. Every row containing non-zero entries off the diagonal is detected with probability at least 1/2 under such a random query, which implies error at most ε after O(log(1/ε)) random queries. Furthermore, this algorithm works over any field.
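A minimal NumPy sketch of this tester (the function name, the concrete consistency check (Mv)_i = r_i · v_i against the row sums r, and the trial count are our own rendering of the algorithm):

```python
import numpy as np

def looks_diagonal(M, trials, rng):
    """Test with one all-ones query plus `trials` random 0/1 queries."""
    n = M.shape[0]
    r = M @ np.ones(n)  # row sums; for a diagonal M this IS the diagonal
    for _ in range(trials):
        v = rng.integers(0, 2, size=n).astype(float)
        # A diagonal matrix with diagonal r must answer (Mv)_i = r_i * v_i.
        if not np.allclose(M @ v, r * v):
            return False
    return True

rng = np.random.default_rng(5)
D = np.diag([1.0, 2.0, 3.0, 4.0])
E = D.copy()
E[0, 2] = 5.0  # one off-diagonal entry
print(looks_diagonal(D, 20, rng))  # True: diagonal matrices always pass
print(looks_diagonal(E, 20, rng))  # False with probability >= 1 - 2^-20
```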

Deciding if M is a Unitary Matrix
Given an n × n complex matrix M, we show that 1 query is enough to test whether M is unitary, that is, whether M*M = MM* = I.

We choose a random Gaussian vector v and compute Mv. We declare M to be unitary if and only if ‖Mv‖_2 = ‖v‖_2. It is easy to check that if M is unitary, the test succeeds. We then show that if M is not unitary, ‖Mv‖_2 ≠ ‖v‖_2 with probability 1. Let the singular value decomposition of M be M = UΣV^T. We have ‖Mv‖_2² = ‖Σu‖_2², where u = V^T v is a random Gaussian vector with ‖u‖_2² = ‖v‖_2². The diagonal values of Σ are not all 1, since M is not unitary. Consider Σ_i σ_i² u_i², where σ_i = Σ_{i,i}. The test accepts if and only if Σ_i σ_i² u_i² = ‖u‖_2² = Σ_i u_i², that is, if and only if Σ_i u_i²(σ_i² − 1) = 0. The left-hand side is a non-zero polynomial in the u_i, and it evaluates to 0 with probability 0 since the u_i are drawn from a continuous distribution.
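A one-query NumPy sketch of this test (the function name and the floating-point tolerance are ours; we use a real orthogonal matrix for simplicity):

```python
import numpy as np

def looks_unitary(M, rng):
    """One Gaussian query: a unitary M preserves the Euclidean norm."""
    v = rng.standard_normal(M.shape[1])
    return bool(np.isclose(np.linalg.norm(M @ v), np.linalg.norm(v)))

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a random orthogonal matrix
print(looks_unitary(Q, rng))        # True
print(looks_unitary(2.0 * Q, rng))  # False: 2Q doubles every norm
```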

Approximating the Maximum Eigenvalue
The upper bound is due to [29]. Given a matrix M ∈ R^{m×n}, we can ε-approximate the maximum eigenvalue of M by taking a random vector v ∈ R^n and computing M^r v for r = O(ε^{−0.5} log n). This requires r adaptive oracle queries to M. See [29] for details, and [33] for a matching lower bound for adaptive queries. A non-adaptive Ω(n) lower bound is given in [27].
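As an illustration of this kind of adaptive querying, here is a plain power-iteration sketch in NumPy (a simplification of ours; [29] uses a more refined method to obtain the ε^{−0.5} dependence, and the test matrix with known spectrum is our own choice):

```python
import numpy as np

def max_eigenvalue_estimate(M, r, rng):
    """Estimate the top eigenvalue of a PSD matrix M with r adaptive queries."""
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    est = 0.0
    for _ in range(r):
        w = M @ v    # one oracle query per iteration, adaptive in v
        est = v @ w  # Rayleigh quotient at the current iterate
        v = w / np.linalg.norm(w)
    return est

rng = np.random.default_rng(7)
# A symmetric test matrix with known eigenvalues 1.0, ..., 2.0.
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
M = Q.T @ np.diag(np.linspace(1.0, 2.0, 20)) @ Q
assert abs(max_eigenvalue_estimate(M, 600, rng) - 2.0) < 1e-6
```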

Streaming and Statistics Problems
In this section we discuss the following streaming and statistics problems: testing an all ones column/row and identical columns/rows; approximating row norms or finding heavy hitters; and computing the majority or parity of columns/rows.

Testing Existence of an All Ones Column/Row
Given a matrix M ∈ {0, 1}^{m×n}, we want to test if M has a column (or row) with all 1 entries. It is trivial to test whether M has an all 1s column (or row) using n queries, e.g., e_1, …, e_n. We consider this problem both over F_2 and over R. Note that in the case over R, if we allow an arbitrary query vector, we can set one query v = (1, 2, 4, 8, …, 2^{n−1}) and then reconstruct M exactly. Thus, in order to avoid such trivial cases, we also restrict the entries of the query to be in {0, 1, 2, …, n^C}.
For testing the existence of an all ones column, we reduce the problem to the communication complexity of Disjointness. Disjointness requires Ω(n) bits of communication to decide whether two sets with characteristic vectors x, y ∈ {0, 1}^n are disjoint with constant probability, where the randomness is taken only over the coin tosses of the protocol (not over the inputs). Suppose the first m − 1 rows of M each equal x^T while the last row equals y^T. If we can decide whether M has an all ones column with q non-adaptive queries v_1, …, v_q, then we obtain a protocol for Disjointness with communication q by letting Alice send the message x^T v_1, …, x^T v_q. Thus, from the communication complexity lower bound for Disjointness, q = Ω(n) queries over F_2 are necessary to test if there is an all ones column in M, which shows that the naïve algorithm is already optimal. For queries over R, note that each entry x^T v_j of the message is represented with O(log n) bits, and as a result q = Ω(n/log n).
Testing the existence of an all ones row with queries over R is trivial deterministically by querying v = (1, 1, …, 1). Next we study the query complexity of deterministically testing an all 1s row with queries over F_2. For any q ≤ n − 1 queries V = [v_1, …, v_q], there is a non-zero vector z ≠ 0 such that z^T V = 0. Therefore the query matrix V cannot distinguish whether a row equals x^T or x^T + z^T. However, x^T and x^T + z^T cannot both be all 1s rows, and hence n queries are necessary. This result also shows that the query complexity of the same problem over different fields can be quite different. We note that for randomized algorithms, O(log(1/ε)) queries suffice over F_2, since the inner product of a row which is not all 1s disagrees with the parity of the query with probability 1/2.
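The randomized F_2 test can be sketched as follows (a toy illustration with our own function name and example matrix): for a random query v ∈ {0,1}^n, the all-ones row returns parity(v), while any other row disagrees with parity(v) with probability 1/2, so each non-all-ones row survives q queries with probability 2^{−q}.

```python
import numpy as np

def all_ones_row_candidates(matvec_mod2, m, n, num_queries=30, rng=None):
    """Randomized test over F_2 for all-ones rows.

    For a random query v in {0,1}^n, the all-ones row returns parity(v);
    any other row disagrees with parity(v) with probability 1/2, so after
    q queries each non-all-ones row survives with probability 2^{-q}."""
    rng = np.random.default_rng(rng)
    alive = np.ones(m, dtype=bool)
    for _ in range(num_queries):
        v = rng.integers(0, 2, size=n)
        out = matvec_mod2(v)                   # M v over F_2, an m-bit vector
        alive &= (out == v.sum() % 2)          # keep rows matching parity(v)
    return alive                               # True marks probable all-ones rows

M = np.array([[1, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 1, 1]])
alive = all_ones_row_candidates(lambda v: (M @ v) % 2, 3, 4, rng=0)
print(alive)    # row 1 is eliminated with overwhelming probability
```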
Evaluating the OR/AND function of columns/rows of a Boolean matrix can be reduced to testing existence of an all 1 or all 0 column/row, and hence the same bounds follow.

Identical Columns/Rows
Given an m × n matrix M, we want to test whether M has two identical columns or rows. The trivial solution naively retrieves all the information in M with n queries (the standard basis vectors, which recover every column). In this section, we consider the query complexity over F_2.
Testing identical columns can be reduced to Disjointness. Suppose Alice and Bob have x, y ∈ {0, 1}^n. Alice expands her vector x to an (m/2) × n matrix whose j-th column is all 1s if x_j = 1 and a uniformly random vector z_j otherwise; Bob expands y similarly, and M is formed by stacking the two matrices and prepending an all 1s column. Then M is an m × (n + 1) matrix with the first column being all 1s. For j ≥ 2, the j-th column is all 1s if and only if x_j = y_j = 1, in which case M has two identical columns of all 1 entries. For columns where x_j, y_j are not both equal to 1, without loss of generality we may assume the j-th and j′-th columns satisfy x_j = x_{j′} = 0 and y_j = y_{j′}. Then the two columns are identical only if z_j = z_{j′}, which happens with probability ≤ 1/2^{m/2−1}. Therefore the overall probability of two not-all-ones columns of M being identical is bounded by n^2/2^{m/2}. Thus the error probability is less than ε if m ≥ 4 log(n/ε).
That is, up to a small error ε, two identical columns in M must both be all ones columns, which is equivalent to the vectors x, y held by Alice and Bob not being disjoint. Then, because Disjointness requires Ω(n) bits of communication, and for each oracle query Alice or Bob communicates at most m bits, at least Ω(n/m) oracle queries to M are necessary.
On the other hand, to test identical rows with error ε, it suffices to make q = O(log(m/ε)) random queries with each entry uniformly random over {0, 1}. Since for every pair of distinct rows a random query distinguishes them with probability ≥ 1/2, with ⌈log(m^2/ε)⌉ queries each pair of distinct rows is misclassified as identical with probability ≤ ε/m^2. By a union bound, the overall false-positive error is bounded by (ε/m^2) · (m choose 2) < ε, while there is no false-negative error since, for all queries, identical rows always lead to identical outputs.
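The identical-rows test amounts to sketching each row with q random 0/1 queries and looking for colliding sketches (a minimal sketch; the function name and example are ours, and the 0/1 queries work over the reals here):

```python
import numpy as np

def has_identical_rows(matvec, m, n, eps=1e-3, rng=None):
    """Test for identical rows with q = ceil(log2(m^2/eps)) random 0/1
    queries: identical rows always produce identical sketches, while each
    pair of distinct rows collides on a random query with prob <= 1/2."""
    rng = np.random.default_rng(rng)
    q = int(np.ceil(np.log2(m * m / eps)))
    V = rng.integers(0, 2, size=(n, q))
    sketches = matvec(V)                   # m x q matrix of row sketches M V
    seen = set()
    for row in map(tuple, sketches):
        if row in seen:
            return True                    # two rows share the same sketch
        seen.add(row)
    return False

M = np.array([[1, 2, 3], [4, 5, 6], [1, 2, 3]])
print(has_identical_rows(lambda V: M @ V, 3, 3, rng=0))   # True: rows 0 and 2
```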

Approximating Row Norms and Finding Heavy Hitters
To approximate the norms of each row in a matrix M ∈ R^{m×n}, we recall the Johnson-Lindenstrauss lemma, which guarantees that norms are roughly preserved when embedded into a lower dimensional space. Thus, with q = O(ε^{−2} log m) and an n × q random query matrix V, the output M·V preserves the row norms of M up to a (1 ± ε)-factor. The above algorithm also gives a natural upper bound for finding heavy hitters in the matrix M, which requires finding all rows M_i with norm ‖M_i‖_2^2 ≥ (1/10)‖M‖_F^2 and not outputting any row M_i with ‖M_i‖_2^2 ≤ (1/20)‖M‖_F^2 (rows with norm in between the two quantities can be classified arbitrarily). Again we use the Johnson-Lindenstrauss lemma to approximate all row norms and decide which rows are heavy hitters.
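A sketch of the row-norm estimator using a Gaussian query matrix (the function name and the constant 8 in the choice of q are our own, loose choices):

```python
import numpy as np

def approx_row_norms(matvec, n, eps, m, rng=None):
    """Johnson-Lindenstrauss sketch: q = O(eps^-2 log m) Gaussian queries
    preserve every row norm of M up to a (1 +/- eps) factor w.h.p."""
    rng = np.random.default_rng(rng)
    q = int(np.ceil(8 * np.log(m) / eps**2))
    V = rng.standard_normal((n, q)) / np.sqrt(q)   # scaled Gaussian queries
    S = matvec(V)                                  # m x q sketch M V
    return np.linalg.norm(S, axis=1)               # approx ||M_i||_2 per row

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 100))
est = approx_row_norms(lambda V: M @ V, 100, eps=0.2, m=20, rng=0)
true = np.linalg.norm(M, axis=1)
```

Thresholding `est` squared against an estimate of ‖M‖_F^2 (the sum of the squared row-norm estimates) then yields the heavy hitters described above.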

Majority
Given a matrix M ∈ {0, 1} m×n , we want to compute the majority of rows or columns in M.
The majority of each row of M is computed trivially with a single all 1s query and addition over R: the query returns the number of 1s in each row.
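This one-query idea fits in a couple of lines (a toy illustration; the example matrix is ours):

```python
import numpy as np

# Row majorities from a single all-ones query over the reals:
# (M @ 1)_i counts the ones in row i, so majority_i = [count > n/2].
M = np.array([[1, 0, 1, 1],
              [0, 0, 1, 0]])
counts = M @ np.ones(M.shape[1])
majority = (counts > M.shape[1] / 2).astype(int)
print(majority)   # [1 0]
```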
For the majority of columns of M, we use a matrix M similar to the one constructed from Disjointness in Section 4.2 to obtain a lower bound. More specifically, we consider x, y whose intersection has size at most 1. Let M be obtained from x, y such that the first m/2 rows are identical to x^T and the remaining rows are identical to y^T. Then, if M has a column with majority 1, that column must be all 1s, and we can conclude that x and y are not disjoint. As a result, Ω(n/log n) queries are necessary to compute the majority of columns of M.

Parity
For parity we consider a matrix M ∈ {0, 1}^{m×n} with queries only over F_2. Computing the parity of each row of M is trivial using the query vector (1, 1, …, 1). However, to compute the parity of all columns of M, we claim that at least Ω(n) queries are necessary.
To see this, let V be any n × q query matrix. Note that the parity of each column of M remains the same if we sum up all the rows, i.e., M′ := P·M has the same parity on each column as M, where P ∈ F_2^{m×m} is the matrix whose first row is all 1s and whose remaining entries are 0. Thus M′V = PMV is a 1 × q row vector followed by m − 1 zero rows, since M′, as well as P, is non-zero only in the first row. Then we must have q = Ω(n) to obtain the answers to n parity instances from M′V. Indeed, if we place the uniform distribution on M, then its columns define n uniform parity bits, while for any fixed V we obtain only q bits of information; by Yao's minimax principle there must be a fixed V which succeeds with probability at least 2/3 on this distribution, which forces q = Ω(n). This is a typical example illustrating the difference between left- and right-queries.
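The left/right asymmetry can be seen concretely (a toy demonstration; the random matrix is ours): a single left query with the all-ones row vector computes every column parity at once, while the argument above shows right queries need Ω(n).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(5, 8))

# One LEFT query suffices: the all-ones row vector 1^T M yields every
# column parity simultaneously.
col_parity_left = (np.ones(5, dtype=int) @ M) % 2

# For RIGHT queries M v, summing the rows first (M' = P M, with P nonzero
# only in its first row) shows M' V carries only q bits about the n column
# parities, so q = Omega(n) right queries are needed.
assert np.array_equal(col_parity_left, M.sum(axis=0) % 2)
print(col_parity_left)
```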

Graph Problems
In this section, we provide our results related to graph problems: testing graph connectivity in Section 5.1 and triangle detection in Section 5.2.

Connectivity
Theorem 5. Given the bipartite adjacency matrix A ∈ {0, 1} n×n of a graph, we need Ω (n/ log n) queries to decide whether the graph is connected with constant probability.
Proof. Consider two row vectors u, v ∈ {0, 1}^{n−1} and construct the matrix A as follows. The first n/2 rows of A equal u and the rest equal v. Also, add an all 1s column to A. Now A can be treated as the bipartite adjacency matrix of a graph G with n vertices on each side, where A_{i,j} = 1 iff there is an edge from the i-th left vertex to the j-th right vertex. Since all left vertices connect to the n-th right vertex, the graph G is disconnected if and only if there exists some right vertex which does not connect to any left vertex, that is, the corresponding column of A is an all 0s column. In other words, G is disconnected if and only if the vectors u and v are both 0 in the same position. Thus any algorithm that uses q(n) non-adaptive queries on the right of A to decide the connectivity of G immediately implies a protocol for set disjointness, provided we replace 1s with 0s in the input characteristic vectors to the set disjointness problem. The communication is at most q(n) log n, and thus q(n) = Ω(n/log n).

Theorem 6. Given the signed edge-vertex incidence matrix M ∈ {0, ±1}^{n × (n choose 2)} of a graph G with n vertices, the connectivity of G can be decided with polylog(n) non-adaptive queries.
This follows from the main theorem of [21] (also proved in the work [1]), which we state next.

Theorem 7 ([21]). There is a distribution on (n choose 2) × polylog(n) matrices S such that from MS one can construct a (1 ± 0.1)-sparsifier H of the graph G with constant probability. That is, x^T L_G x = (1 ± 0.1) x^T L_H x for all x, with constant probability, where L_G and L_H are the corresponding graph Laplacians.
By the above, every cut of G is multiplicatively approximated and hence G is connected iff H is connected, since a graph is disconnected iff it has a zero cut.

Triangle Detection
Theorem 8. If an n × n matrix A is the adjacency matrix of a graph G, then determining whether G contains a triangle or not requires Ω (n/ log n) queries, even for randomized algorithms succeeding with constant probability.
Proof. To obtain a lower bound on q(n), we use a 2-player communication lower bound for counting the number of triangles in a graph G, where the edges are distributed across the two players, Alice and Bob. Namely, it is known [9,15,16] that if Alice has a subset of the edges of G, and Bob has the remaining (disjoint) subset of edges of G, then the multi-round randomized communication complexity of deciding if there is a triangle in G is Ω(n^2). Alice can view her subset of edges as an adjacency matrix A′, and Bob can view his subset of edges as an adjacency matrix A′′, so that A = A′ + A′′. To execute the query algorithm on A, Alice sends A′x_1 to Bob, who computes A′′x_1 followed by A′x_1 + A′′x_1 = Ax_1, and sends the result back to Alice. Alice then possibly adaptively chooses x_2, which is also known to Bob since he knows x_1 and Ax_1, and sends Bob A′x_2, from which Bob can compute A′′x_2 and Ax_2 = A′x_2 + A′′x_2. This process repeats until all q(n) queries have been executed, at which point Bob, by the success guarantee of the algorithm, can decide if G contains a triangle with, say, probability at least 2/3. By the bounds on the bit complexity of the queries, the total communication is O(q(n) · n log n), which must be Ω(n^2); consequently q(n) = Ω(n/log n), as desired.
Conclusions

We have shown that the query complexity in this model can depend on the underlying field, on whether queries are allowed on both sides or only on the right, as well as on the representation of the graph for graph problems. Given connections to sketching algorithms, streaming, and compressed sensing, we believe this area deserves its own study. Some interesting open questions concern computing matrix norms, such as Schatten-p norms, for which tight bounds in streaming and communication complexity models remain elusive; for recent work on this see [26,28,12]. Given the success of our model in proving lower bounds for approximate rank, for which we also do not have streaming or communication lower bounds, perhaps tight bounds in our query model are possible for matrix norms. Such bounds may give insight for other models.