Deterministic Identity Testing for Sum of Read-Once Oblivious Arithmetic Branching Programs

A read-once oblivious arithmetic branching program (ROABP) is an arithmetic branching program (ABP) where each variable occurs in at most one layer. We give the first polynomial-time whitebox identity test for a polynomial computed by a sum of constantly many ROABPs. We also give a corresponding blackbox algorithm with quasi-polynomial-time complexity $n^{O(\log n)}$. In both cases, our time complexity is double exponential in the number of ROABPs. ROABPs are a generalization of set-multilinear depth-3 circuits. The prior results for the sum of constantly many set-multilinear depth-3 circuits were only slightly better than brute force, i.e., exponential time. Our techniques are a new interplay of three concepts for ROABPs: low evaluation dimension, basis isolating weight assignments and low-support rank concentration. We relate basis isolation to rank concentration and extend it to a sum of two ROABPs using evaluation dimension.


Introduction
Polynomial Identity Testing (PIT) is the problem of testing whether a given $n$-variate polynomial is identically zero or not. The input to the PIT problem may be in the form of arithmetic circuits or arithmetic branching programs (ABPs). They are the arithmetic analogues of boolean circuits and boolean branching programs, respectively. It is well known that PIT can be solved in randomized polynomial time, see e.g. [29]. The randomized algorithm just evaluates the polynomial at random points; thus, it is a blackbox algorithm. In contrast, an algorithm is a whitebox algorithm if it looks inside the given circuit or branching program. We consider both whitebox and blackbox algorithms.
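As a concrete illustration of the randomized blackbox test mentioned above, here is a minimal Python sketch (the function names and the example polynomials are ours, not from the paper): the blackbox is evaluated at uniformly random points modulo a large prime, relying on the Schwartz–Zippel lemma.

```python
import random

def is_zero_whp(blackbox, n, trials=20, p=10**9 + 7):
    """Randomized blackbox PIT: a nonzero polynomial of total degree D
    vanishes at a random point of S^n with probability at most D/|S|
    (Schwartz-Zippel), so surviving many random evaluations means the
    polynomial is zero with high probability."""
    for _ in range(trials):
        point = tuple(random.randrange(p) for _ in range(n))
        if blackbox(point) % p != 0:
            return False  # found a witness of nonzeroness
    return True

# (x1 + x2)^2 - x1^2 - 2*x1*x2 - x2^2 is identically zero.
zero_poly = lambda x: (x[0] + x[1])**2 - x[0]**2 - 2*x[0]*x[1] - x[1]**2
nonzero_poly = lambda x: x[0]*x[1] - 7
```

This sketches only the randomized baseline; the point of the paper is to replace the random points by a deterministically constructed set.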
Our algorithm uses the fact that the evaluation dimension of an ROABP is equal to the width of the ROABP [21,11]. Namely, we consider a set of linear dependencies derived from partial evaluations of the ROABPs. We view identity testing of the sum of two ROABPs as testing the equivalence of two ROABPs. Our idea is inspired by a similar result in the boolean case: testing the equivalence of two ordered boolean branching programs (OBDDs) is in polynomial time [24]. OBDDs too have a similar property of small evaluation dimension, except that the notion of linear dependence becomes equality in the boolean setting. Our equivalence test, for two ROABPs A and B, takes linear dependencies among partial evaluations of A and verifies them for the corresponding partial evaluations of B. As B is an ROABP, the verification of these dependencies reduces to identity testing for a single ROABP.
In Section 3.2, we generalize this test to the sum of c ROABPs. There we take A as one ROABP and B as the sum of the remaining c − 1 ROABPs. In this case, the verification of the dependencies for B becomes the question of identity testing of a sum of c − 1 ROABPs, which we solve recursively.
The same idea can be applied to decide the equivalence of an OBDD with the XOR of c − 1 OBDDs. We skip these details here as we are mainly interested in the arithmetic case.
In Section 4, we give an identity test for a sum of ROABPs in the blackbox setting. That is, we are given blackbox access to a sum of ROABPs and not to the individual ROABPs. Our main result here is as follows (Theorem 4.9): There is a blackbox PIT for the sum of constantly many ROABPs that works in quasi-polynomial time.
The exact time bound we get for the PIT algorithm is $(ndw)^{O(c\,2^c \log(ndw))}$, where $n$ is the number of variables, $d$ is the degree bound of the variables, $c$ is the number of ROABPs and $w$ is their width. Hence our time bound is double exponential in $c$, and quasi-polynomial in $n$, $d$, $w$.
Here again, using the low evaluation dimension property, the question is reduced to identity testing for a single ROABP. But a hitting-set for ROABPs does not suffice here; we need an efficient shift of the variables which gives low-support concentration in any polynomial computed by an ROABP. $\ell$-concentration in a polynomial $P(x)$ means that all of its coefficients are in the linear span of its coefficients corresponding to monomials with support $< \ell$. Essentially, we show that a shift which achieves low-support concentration for an ROABP of width $w^{2^c}$ also works for a sum of $c$ ROABPs of width $w$ (Lemma 4.8). This is surprising, because as mentioned above, a sum of $c$ ROABPs is not captured by an ROABP with polynomially bounded width [20].
A novel part of our proof is the idea that for a polynomial over a $k$-dimensional $\mathbb{F}$-algebra $\mathsf{A}_k$, a shift by a basis isolating weight assignment achieves low-support concentration. To elaborate, let $\mathrm{w}\colon x \to \mathbb{N}$ be a basis isolating weight assignment for a polynomial $P(x) \in \mathsf{A}_k[x]$; then $P(x + t^{\mathrm{w}})$ has $O(\log k)$-concentration over $\mathbb{F}(t)$. As Agrawal et al. [3] gave a basis isolating weight assignment for ROABPs, we can use it to get low-support concentration. Forbes et al. [9] had also achieved low-support concentration in ROABPs, but at a higher cost. Our concentration proof differs significantly from the older rank concentration proofs [4,9], which always assume distinct weights for all the monomials or coefficients. Here, we only require that the weight of a coefficient is greater than the weights of the basis coefficients that it depends on.

Notation
Let $x = (x_1, x_2, \ldots, x_n)$ be a tuple of $n$ variables. For any $a = (a_1, a_2, \ldots, a_n) \in \mathbb{N}^n$, we denote by $x^a$ the monomial $\prod_{i=1}^n x_i^{a_i}$. The support size of a monomial $x^a$ is given by $\operatorname{supp}(a) = |\{i \in [n] \mid a_i \neq 0\}|$.
Let $A(x) \in \mathbb{F}[x]$ be a polynomial over a field $\mathbb{F}$. By $\operatorname{coeff}_A(x^a) \in \mathbb{F}$ we denote the coefficient of the monomial $x^a$ in $A(x)$. Hence, we can write $A(x) = \sum_a \operatorname{coeff}_A(x^a)\, x^a$. The sparsity of the polynomial $A(x)$ is the number of nonzero coefficients $\operatorname{coeff}_A(x^a)$.
We also consider matrix polynomials $A(x) \in \mathbb{F}^{w\times w}[x]$, where the coefficients $\operatorname{coeff}_A(x^a)$ are $w \times w$ matrices, for some $w$. In an abstract setting, these are polynomials over a $w^2$-dimensional $\mathbb{F}$-algebra $\mathsf{A}$.
Recall that an $\mathbb{F}$-algebra is a vector space over $\mathbb{F}$ with a multiplication which is bilinear and associative, i.e., $\mathsf{A}$ is a ring. The coefficient space of $A$ is then defined as the span of all coefficients of $A$, i.e., $\operatorname{span}_{\mathbb{F}}\{\operatorname{coeff}_A(x^a) \mid a \in \mathbb{N}^n\}$.

Consider a partition of the variables $x$ into two parts $y$ and $z$, with $|y| = k$. A polynomial $A(x)$ can be viewed as a polynomial in the variables $y$, where the coefficients are polynomials in $\mathbb{F}[z]$. For a monomial $y^a$, let us denote its coefficient in $A$ by $A_{(y,a)} \in \mathbb{F}[z]$. Thus, $A(x)$ can be written as

$$A(x) = \sum_{a \in \mathbb{N}^k} A_{(y,a)}\, y^a. \qquad (1)$$

The coefficient $A_{(y,a)}$ is also sometimes expressed in the literature as a partial derivative $\partial A / \partial y^a$ evaluated at $y = 0$ (and multiplied by an appropriate constant), see [11, Section 6].

For a set of polynomials $\mathcal{P}$, we define their $\mathbb{F}$-span as $\operatorname{span}_{\mathbb{F}} \mathcal{P} = \{\sum_{A \in \mathcal{P}} \alpha_A A \mid \alpha_A \in \mathbb{F} \text{ for all } A \in \mathcal{P}\}$. The set of polynomials $\mathcal{P}$ is said to be $\mathbb{F}$-linearly independent if $\sum_{A \in \mathcal{P}} \alpha_A A = 0$ holds only for $\alpha_A = 0$, for all $A \in \mathcal{P}$. The dimension $\dim_{\mathbb{F}} \mathcal{P}$ of $\mathcal{P}$ is the cardinality of the largest $\mathbb{F}$-linearly independent subset of $\mathcal{P}$.

For a matrix $R$, we denote by $R(i,\cdot)$ and $R(\cdot,i)$ the $i$-th row and the $i$-th column of $R$, respectively. For any $a \in \mathbb{F}^{k\times k}$ and $b \in \mathbb{F}^{\ell\times\ell}$, the tensor product of $a$ and $b$ is denoted by $a \otimes b$. The inner product is denoted by $\langle a, b \rangle$. We abuse this notation slightly: for any $a, R \in \mathbb{F}^{w\times w}$, let $\langle a, R \rangle = \sum_{i=1}^w \sum_{j=1}^w a_{ij} R_{ij}$.

Arithmetic branching programs
An arithmetic branching program (ABP) is a directed graph with $d+1$ layers of vertices $(V_0, V_1, \ldots, V_d)$. The layers $V_0$ and $V_d$ each contain only one vertex, the start node $v_0$ and the end node $v_d$, respectively. The edges only go from the vertices in layer $V_{i-1}$ to the vertices in layer $V_i$, for any $i \in [d]$. All the edges in the graph have weights from $\mathbb{F}[x]$, for some field $\mathbb{F}$. The length of an ABP is the length of a longest path in the ABP, i.e., $d$. An ABP has width $w$ if $|V_i| \le w$ for all $1 \le i \le d-1$.
For an edge $e$, let us denote its weight by $W(e)$. For a path $p$, its weight $W(p)$ is defined to be the product of the weights of all the edges in it, $W(p) = \prod_{e \in p} W(e)$. The polynomial $A(x)$ computed by the ABP is the sum of the weights of all the paths from $v_0$ to $v_d$, $A(x) = \sum_{p:\, v_0 \rightsquigarrow v_d} W(p)$.

Let the set of nodes in $V_i$ be $\{v_{i,j} \mid j \in [w]\}$. The branching program can alternatively be represented by a matrix product $\prod_{i=1}^d D_i$, where $D_1 \in \mathbb{F}[x]^{1\times w}$, $D_i \in \mathbb{F}[x]^{w\times w}$ for $2 \le i \le d-1$, and $D_d \in \mathbb{F}[x]^{w\times 1}$, such that $D_1(j) = W(v_0, v_{1,j})$, $D_i(j, j') = W(v_{i-1,j}, v_{i,j'})$ for $2 \le i \le d-1$, and $D_d(j) = W(v_{d-1,j}, v_d)$.
Here we use the convention that $W(u, v) = 0$ if $(u, v)$ is not an edge in the ABP.
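The matrix-product view can be sketched in a few lines of Python (our own toy encoding, not from the paper): an ABP is a list of layer functions mapping a point to the matrix of edge weights at that point, and $A(x)$ is the resulting $1 \times 1$ matrix product.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def eval_abp(layers, point):
    """A(point) as the 1x1 matrix product D1(point) * ... * Dd(point),
    i.e. the sum over all v0 -> vd paths of the product of edge weights."""
    result = layers[0](point)
    for layer in layers[1:]:
        result = mat_mul(result, layer(point))
    return result[0][0]

# Toy width-2 ABP computing x1*x2 + (x1 + 1)*x3:
layers = [
    lambda x: [[x[0], x[0] + 1]],      # D1: weights v0 -> V1
    lambda x: [[x[1], 0], [0, x[2]]],  # D2: weights V1 -> V2 (0 = no edge)
    lambda x: [[1], [1]],              # D3: weights V2 -> vd
]
```

The two entries of the intermediate $1 \times 2$ vector correspond to the two partial path sums through the middle layer.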

Read-once oblivious arithmetic branching programs
An ABP is called a read-once oblivious ABP (ROABP) if the edge weights in every layer are univariate polynomials in the same variable, and every variable occurs in at most one layer. Hence, the length of an ROABP is $n$, the number of variables. The entries in the matrices $D_i$ defined above then come from $\mathbb{F}[x_{\pi(i)}]$, for all $i \in [n]$, where $\pi$ is a permutation on the set $[n]$. The order $(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)})$ is said to be the variable order of the ROABP.
We will view $D_i$ as a polynomial in the variable $x_{\pi(i)}$, whose coefficients are $w$-dimensional vectors or matrices. Namely, for an exponent $a = (a_1, a_2, \ldots, a_n)$, the coefficient of $x_{\pi(i)}^{a_{\pi(i)}}$ in $D_i$ is $\operatorname{coeff}_{D_i}\big(x_{\pi(i)}^{a_{\pi(i)}}\big)$. The read-once property gives us an easy way to express the coefficients of the polynomial $A(x)$ computed by an ROABP: for $A = D_1(x_{\pi(1)})\, D_2(x_{\pi(2)}) \cdots D_n(x_{\pi(n)})$, we have

$$\operatorname{coeff}_A(x^a) = \prod_{i=1}^n \operatorname{coeff}_{D_i}\big(x_{\pi(i)}^{a_{\pi(i)}}\big). \qquad (2)$$

We also consider matrix polynomials computed by an ROABP. A matrix polynomial $A(x) \in \mathbb{F}^{w\times w}[x]$ is said to be computed by an ROABP if $A = D_1 D_2 \cdots D_n$, where $D_i \in \mathbb{F}^{w\times w}[x_{\pi(i)}]$ for $i = 1, 2, \ldots, n$ and some permutation $\pi$ on $[n]$. Similarly, a vector polynomial $A(x) \in \mathbb{F}^{1\times w}[x]$ is said to be computed by an ROABP if $A = D_1 D_2 \cdots D_n$, where $D_1 \in \mathbb{F}^{1\times w}[x_{\pi(1)}]$ and $D_i \in \mathbb{F}^{w\times w}[x_{\pi(i)}]$ for $i = 2, \ldots, n$. Usually, we will assume that an ROABP computes a polynomial in $\mathbb{F}[x]$, unless mentioned otherwise.
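The product formula for coefficients can be checked mechanically on a toy ROABP. In the sketch below (our own encoding; each layer is a dict mapping a degree to its coefficient matrix), the coefficient of $x^a$ is computed as the product of the per-layer coefficient matrices, for the width-2 ROABP computing $A = 2 + x_3 + x_1 x_2$ via $D_1 = [1,\ x_1]$, $D_2 = \operatorname{diag}(1, x_2)$, $D_3 = [x_3 + 2,\ 1]^T$.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def roabp_coeff(layers, exponent):
    """coeff_A(x^a) as the product of the per-layer coefficient
    matrices coeff_{D_i}(x_i^{a_i}) -- the read-once property makes
    the coefficient of each monomial factor layer by layer."""
    result = layers[0][exponent[0]]
    for D, a in zip(layers[1:], exponent[1:]):
        result = mat_mul(result, D[a])
    return result[0][0]

# Width-2 ROABP for A = 2 + x3 + x1*x2; layer i maps degree -> matrix.
layers = [
    {0: [[1, 0]], 1: [[0, 1]]},              # D1 = [1, x1]
    {0: [[1, 0], [0, 0]], 1: [[0, 0], [0, 1]]},  # D2 = diag(1, x2)
    {0: [[2], [1]], 1: [[1], [0]]},          # D3 = [x3 + 2, 1]^T
]
```

Each query multiplies one matrix per variable, so a single coefficient is obtained in time polynomial in $n$ and $w$.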
Let $A(x)$ be the polynomial computed by an ROABP and let $y$ and $z$ be a partition of the variables $x$ such that $y$ is a prefix of the variable order of the ROABP. Recall from equation (1) that $A_{(y,a)} \in \mathbb{F}[z]$ is the coefficient of the monomial $y^a$ in $A(x)$. Nisan [21] showed that for every prefix $y$, the dimension of the set of coefficient polynomials $A_{(y,a)}$ is bounded by the width of the ROABP. This holds in spite of the fact that the number of these polynomials is large.
Proof. Let $y = (x_1, \ldots, x_k)$ be a prefix of the variable order and let $z = (x_{k+1}, \ldots, x_n)$ be the remaining variables of $x$. Define $P(y) = D_1 D_2 \cdots D_k$ and $Q(z) = D_{k+1} D_{k+2} \cdots D_n$. Then $P$ and $Q$ are vectors of length $w$, $P \in \mathbb{F}[y]^{1\times w}$ and $Q \in \mathbb{F}[z]^{w\times 1}$, and $A = PQ = \sum_{i=1}^w P_i Q_i$. We get the following generalization of equation (2): for any $a \in \{0, 1, \ldots, d\}^k$, the coefficient $A_{(y,a)} \in \mathbb{F}[z]$ of the monomial $y^a$ can be written as

$$A_{(y,a)} = \operatorname{coeff}_P(y^a)\, Q = \sum_{i=1}^w \operatorname{coeff}_{P_i}(y^a)\, Q_i. \qquad (3)$$

That is, every $A_{(y,a)}$ is in the $\mathbb{F}$-span of the polynomials $Q_1, Q_2, \ldots, Q_w$. Hence, the claim follows.
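Nisan's bound can be observed numerically. In the sketch below (our own example, not from the paper) we fully expand a width-2 ROABP, form the matrix whose rows are the prefix-coefficient polynomials $A_{(y,a)}$ written in the monomial basis of $z$, and check that its rank is at most the width, even though the number of rows is larger. The toy program is $D_1 = [1,\ x_1]$, $D_2 = \left[\begin{smallmatrix}1 & x_2\\ x_2 & 1\end{smallmatrix}\right]$, $D_3 = \operatorname{diag}(1+x_3,\ 2)$, $D_4 = [1,\ x_4]^T$.

```python
from fractions import Fraction
from itertools import product as cartesian

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(rows):
    """Rank over the rationals, by Gaussian elimination."""
    rows = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def expand(layers):
    """Map each exponent tuple to its coefficient in the ROABP's polynomial."""
    acc = {(): [[1]]}
    for D in layers:
        acc = {e + (j,): mat_mul(m, M)
               for e, m in acc.items() for j, M in D.items()}
    return {e: m[0][0] for e, m in acc.items()}

# Width-2 ROABP on (x1, x2, x3, x4), individual degree 1.
layers = [
    {0: [[1, 0]], 1: [[0, 1]]},                   # D1 = [1, x1]
    {0: [[1, 0], [0, 1]], 1: [[0, 1], [1, 0]]},   # D2 = [[1, x2], [x2, 1]]
    {0: [[1, 0], [0, 2]], 1: [[1, 0], [0, 0]]},   # D3 = diag(1 + x3, 2)
    {0: [[1], [0]], 1: [[0], [1]]},               # D4 = [1, x4]^T
]
coeffs = expand(layers)

# One row per prefix monomial y^a, y = (x1, x2); columns: suffix monomials.
prefixes = list(cartesian((0, 1), repeat=2))
suffixes = list(cartesian((0, 1), repeat=2))
rows = [[coeffs[a + b] for b in suffixes] for a in prefixes]
```

There are four prefix-coefficient polynomials but, as equation (3) predicts, they all lie in the span of the two entries of $Q$.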
Observe that equation (3) tells us that the polynomials $A_{(y,a)}$ can also be computed by an ROABP of width $w$: by equation (2), we have $\operatorname{coeff}_P(y^a) = \prod_{x_i \in y} \operatorname{coeff}_{D_i}(x_i^{a_i})$. Hence, in the ROABP for $A$ we simply have to replace the matrices $D_i$ which belong to $P$ by the coefficient matrices $\operatorname{coeff}_{D_i}(x_i^{a_i})$. Here, $y$ is a prefix of $x$, but this is not necessary for the construction to work; the variables in $y$ can be arbitrarily distributed in $x$. We summarize the observation in the following lemma (Lemma 2.3). For a general polynomial, the dimension considered in Lemma 2.2 can be exponentially large in $n$. We will next show the converse of Lemma 2.2: if this dimension is small for a polynomial, then there exists a small-width ROABP for that polynomial. Hence, this property characterizes the class of polynomials computed by ROABPs. Forbes et al. [11, Section 6] give a similar characterization in terms of evaluation dimension, for polynomials which can be computed by an ROABP in any variable order. In contrast, we work with a fixed variable order.
As a preparation to prove this characterization, we define a characterizing set of dependencies of a polynomial $A(x)$ of individual degree $d$, with respect to a variable order $(x_1, x_2, \ldots, x_n)$. This set of dependencies will essentially give us an ROABP for $A$ in the variable order $(x_1, x_2, \ldots, x_n)$. For $k \in [n]$, let $y_k = (x_1, x_2, \ldots, x_k)$. Define the sets of exponents $\operatorname{depend}_k(A)$ and $\operatorname{span}_k(A) \subseteq \operatorname{depend}_k(A)$ inductively: $\operatorname{depend}_1(A) = \{0, 1, \ldots, d\}$; $\operatorname{span}_k(A)$ is a subset of $\operatorname{depend}_k(A)$ such that every polynomial $A_{(y_k,b)}$ with $b \in \operatorname{depend}_k(A)$ lies in the span of $\{A_{(y_k,a)} \mid a \in \operatorname{span}_k(A)\}$; and $\operatorname{depend}_{k+1}(A) = \{(a, j) \mid a \in \operatorname{span}_k(A),\ 0 \le j \le d\}$. The definition of $\operatorname{span}_k(A)$ is not unique. For our purpose, it does not matter which of the possibilities we take; we simply fix one of them. We do not require that $\operatorname{span}_k(A)$ is of minimal size, i.e., that the polynomials associated with $\operatorname{span}_k(A)$ constitute a basis for the polynomials associated with $\operatorname{depend}_k(A)$. This is because in the whitebox test in Section 3 we will efficiently construct the sets $\operatorname{span}_k(A)$, and there we cannot guarantee to obtain a basis. We will see that it suffices to have $|\operatorname{span}_k(A)| \le w$. It follows that $|\operatorname{depend}_{k+1}(A)| \le w(d+1)$. Note that for $k = n$, we have $y_n = x$ and therefore $A_{(y_n,a)} = \operatorname{coeff}_A(x^a)$ is a constant for every $a$. Hence, the coefficient space has dimension one in this case, and thus $|\operatorname{span}_n(A)| = 1$. Now we are ready to construct an ROABP for $A$.
Lemma 2.5. Let $A(x)$ be a polynomial of individual degree $d$ such that $|\operatorname{span}_k(A)| \le w$ for every $k \in [n]$. Then there exists an ROABP of width $w$ for $A(x)$ in the variable order $(x_1, x_2, \ldots, x_n)$.
Proof. To keep the notation simple, we assume that $|\operatorname{span}_k(A)| = w$ for each $1 \le k \le n-1$. The argument goes through even when $|\operatorname{span}_k(A)| < w$. Let $\operatorname{span}_k(A) = \{a_{k,1}, a_{k,2}, \ldots, a_{k,w}\}$ and $\operatorname{span}_n(A) = \{a_{n,1}\}$.
To prove the claim, we construct matrices $D_1 \in \mathbb{F}[x_1]^{1\times w}$, $D_k \in \mathbb{F}[x_k]^{w\times w}$ for $2 \le k \le n-1$, and $D_n \in \mathbb{F}[x_n]^{w\times 1}$. The matrices are constructed inductively such that for $k = 1, 2, \ldots, n-1$,

$$A(x) = D_1 D_2 \cdots D_k \left[ A_{(y_k, a_{k,1})} \; A_{(y_k, a_{k,2})} \; \cdots \; A_{(y_k, a_{k,w})} \right]^T. \qquad (4)$$

To construct $D_1 \in \mathbb{F}[x_1]^{1\times w}$, consider the equation

$$A(x) = \sum_{j=0}^{d} x_1^j \, A_{(y_1, j)}. \qquad (5)$$

Recall that $\operatorname{depend}_1(A) = \{0, 1, \ldots, d\}$. Hence, for each $0 \le j \le d$ there exist constants $(\gamma_{j,i})_i$ such that

$$A_{(y_1, j)} = \sum_{i=1}^{w} \gamma_{j,i} \, A_{(y_1, a_{1,i})}. \qquad (6)$$

From equations (5) and (6) we get

$$A(x) = D_1 \left[ A_{(y_1, a_{1,1})} \; A_{(y_1, a_{1,2})} \; \cdots \; A_{(y_1, a_{1,w})} \right]^T, \qquad (7)$$

where $D_1(i) = \sum_{j=0}^{d} \gamma_{j,i}\, x_1^j$ for $i \in [w]$. To construct $D_k$ for $2 \le k \le n-1$, we need a matrix such that

$$\left[ A_{(y_{k-1}, a_{k-1,1})} \; \cdots \; A_{(y_{k-1}, a_{k-1,w})} \right]^T = D_k \left[ A_{(y_k, a_{k,1})} \; \cdots \; A_{(y_k, a_{k,w})} \right]^T. \qquad (8)$$

We know that for each $1 \le i \le w$,

$$A_{(y_{k-1}, a_{k-1,i})} = \sum_{j=0}^{d} x_k^j \, A_{(y_k, (a_{k-1,i}, j))}. \qquad (9)$$

Observe that $(a_{k-1,i}, j)$ is just an extension of $a_{k-1,i}$ and thus belongs to $\operatorname{depend}_k(A)$. Hence, there exists a set of constants $(\gamma_{i,j,h})_h$ such that

$$A_{(y_k, (a_{k-1,i}, j))} = \sum_{h=1}^{w} \gamma_{i,j,h} \, A_{(y_k, a_{k,h})}. \qquad (10)$$

From equations (9) and (10), for each $1 \le i \le w$ we get $A_{(y_{k-1}, a_{k-1,i})} = \sum_{h=1}^{w} \big( \sum_{j=0}^{d} \gamma_{i,j,h}\, x_k^j \big) A_{(y_k, a_{k,h})}$. Define $D_k(i,h) = \sum_{j=0}^{d} \gamma_{i,j,h}\, x_k^j$. Then $D_k$ is the desired matrix in equation (8).
Finally, we obtain $D_n \in \mathbb{F}^{w\times 1}[x_n]$ in an analogous way. Instead of equation (8) we consider the equation

$$\left[ A_{(y_{n-1}, a_{n-1,1})} \; \cdots \; A_{(y_{n-1}, a_{n-1,w})} \right]^T = D_n' \, A_{(y_n, a_{n,1})}. \qquad (11)$$

Recall that $A_{(y_n, a_{n,1})} \in \mathbb{F}$ is a constant that can be absorbed into the last matrix, i.e., we define $D_n = D_n' A_{(y_n, a_{n,1})}$. Combining equations (7), (8), and (11), we get $A(x) = D_1 D_2 \cdots D_n$.

Consider the polynomial $P_k$ defined as the product of the first $k$ matrices $D_1, D_2, \ldots, D_k$ from the above proof, $P_k(y_k) = D_1 D_2 \cdots D_k$. We can write $P_k$ as $P_k = \sum_{a \in \{0,1,\ldots,d\}^k} \operatorname{coeff}_{P_k}(y_k^a)\, y_k^a$, where $\operatorname{coeff}_{P_k}(y_k^a)$ is a vector in $\mathbb{F}^{1\times w}$. We will see next that it follows from the proof of Lemma 2.5 that the coefficient space of $P_k$, i.e., $\operatorname{span}_{\mathbb{F}}\{\operatorname{coeff}_{P_k}(y_k^a) \mid a \in \{0,1,\ldots,d\}^k\}$, has full rank $w$.
Corollary 2.6. For every $1 \le k \le n-1$ and $\ell \in [w]$, we have $\operatorname{coeff}_{P_k}\big(y_k^{a_{k,\ell}}\big) = e_\ell$, the $\ell$-th standard unit vector.

Proof. In the construction of the matrices $D_k$ in the proof of Lemma 2.5, consider the special case in equations (6) and (10) where the coefficient polynomial on the left-hand side is itself one of the chosen basis polynomials, say $A_{(y_k, a_{k,\ell})}$. Then the constants in equations (6) and (10) can be chosen to be $e_\ell$, i.e., $(\gamma_{i,j,h})_h = e_\ell$. By the definition of matrix $D_k$, vector $e_\ell$ becomes the $i$-th row of $D_k$ for the exponent $j$, i.e., $\operatorname{coeff}_{D_k(i,\cdot)}(x_k^j) = e_\ell$. This shows the claim for $k = 1$. For larger $k$, it follows by induction, because $\operatorname{coeff}_{P_k}\big(y_k^{(a_{k-1,i},\, j)}\big) = \operatorname{coeff}_{P_{k-1}}\big(y_{k-1}^{a_{k-1,i}}\big) \cdot \operatorname{coeff}_{D_k}(x_k^j)$.

Whitebox Identity Testing
We will use the characterization of ROABPs provided by Lemmas 2.2 and 2.5 in Section 3.1 to design a polynomial-time algorithm to check if two given ROABPs are equivalent. This is the same problem as to check whether the sum of two ROABPs is zero. In Section 3.2, we extend the test to check whether the sum of constantly many ROABPs is zero.

Equivalence of two ROABPs
Let $A(x)$ and $B(x)$ be two polynomials of individual degree $d$, given by two ROABPs. If the two ROABPs have the same variable order, then one can combine them into a single ROABP which computes their difference, and then apply the test for one ROABP (whitebox [22], blackbox [3]). So the problem is non-trivial only when the two ROABPs have different variable orders. W.l.o.g. we assume that $A$ has order $(x_1, x_2, \ldots, x_n)$. Let $w$ bound the width of both ROABPs. In this section we prove that we can find out in polynomial time whether $A = B$ (Theorem 3.1). The idea is to determine the characterizing set of dependencies among the partial derivative polynomials of $A$, and verify that the same dependencies hold for the corresponding partial derivative polynomials of $B$. By Lemma 2.5, these dependencies essentially define an ROABP. Hence, our algorithm is to construct an ROABP for $B$ in the variable order of $A$. Then it suffices to check whether we get the same ROABP, that is, whether all the matrices $D_1, D_2, \ldots, D_n$ constructed in the proof of Lemma 2.5 are the same for $A$ and $B$. We give some more details.

Computing the dependencies of A. Consider the set of vectors $\{\operatorname{coeff}_{P_k}(y_k^a) \mid a \in \operatorname{depend}_k(A)\}$, where $P_k = D_1 D_2 \cdots D_k$ is the prefix product of the ROABP for $A$. Since $|\operatorname{depend}_k(A)| \le w(d+1)$, this is a small set. Therefore we can efficiently compute coefficients $\gamma_a$ such that for every $b \in \operatorname{depend}_k(A)$,

$$\operatorname{coeff}_{P_k}(y_k^b) = \sum_{a \in \operatorname{span}_k(A)} \gamma_a \operatorname{coeff}_{P_k}(y_k^a). \qquad (12)$$

Note that by equation (12) we have the same dependencies for the polynomials $A_{(y_k,b)}$, because $A_{(y_k,b)} = \operatorname{coeff}_{P_k}(y_k^b) \cdot Q_k$ by equation (3). That is, with the same coefficients $\gamma_a$, we can write

$$A_{(y_k,b)} = \sum_{a \in \operatorname{span}_k(A)} \gamma_a\, A_{(y_k,a)}. \qquad (13)$$

Verifying the dependencies for B.
We want to verify that the dependencies in equation (13) computed for $A$ hold as well for $B$, i.e., that for all $k \in [n]$ and $b \in \operatorname{depend}_k(A)$,

$$B_{(y_k,b)} = \sum_{a \in \operatorname{span}_k(A)} \gamma_a\, B_{(y_k,a)}. \qquad (14)$$

Recall that $y_k = (x_1, x_2, \ldots, x_k)$ and that the ROABP for $B$ has a different variable order. By Lemma 2.3, every polynomial $B_{(y_k,a)}$ has an ROABP of width $w$ and the same order on the remaining variables as the one given for $B$. It follows that each of the $w+1$ polynomials that occur in equation (14) has an ROABP of width $w$ and the same variable order. Hence, we can construct one ROABP for the polynomial

$$B_{(y_k,b)} - \sum_{a \in \operatorname{span}_k(A)} \gamma_a\, B_{(y_k,a)}. \qquad (15)$$

Simply identify all the start nodes and all the end nodes and put the appropriate constants $\gamma_a$ on the weights. Then we get an ROABP of width $w(w+1)$. In order to verify equation (14), it suffices to make a zero-test for this ROABP. This can be done in polynomial time [22].
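The combination step — identify start nodes, identify end nodes, scale by constants — can be sketched in Python (our own toy encoding with layer functions mapping a point to a weight matrix; the two programs below share the variable order and both compute $x_2 + x_1 x_3$, so combining them with constants $1, -1$ yields the zero polynomial):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def eval_abp(layers, point):
    result = layers[0](point)
    for layer in layers[1:]:
        result = mat_mul(result, layer(point))
    return result[0][0]

def block_diag(mats):
    rows, cols = sum(len(m) for m in mats), sum(len(m[0]) for m in mats)
    out = [[0] * cols for _ in range(rows)]
    r = c = 0
    for m in mats:
        for i, row in enumerate(m):
            out[r + i][c:c + len(row)] = row
        r, c = r + len(m), c + len(m[0])
    return out

def combine(roabps, gammas):
    """One ROABP for sum_i gammas[i]*A_i (same variable order): the first
    layers are concatenated (scaled by gamma_i), the middle layers are put
    block-diagonally, and the last layers are stacked."""
    n = len(roabps[0])
    first = lambda x: [[g * v for g, r in zip(gammas, roabps)
                        for v in r[0](x)[0]]]
    mids = [(lambda k: lambda x: block_diag([r[k](x) for r in roabps]))(k)
            for k in range(1, n - 1)]
    last = lambda x: [row for r in roabps for row in r[-1](x)]
    return [first] + mids + [last]

# Two width-2 ROABPs, same order (x1, x2, x3), both computing x2 + x1*x3.
A1 = [lambda x: [[1, x[0]]],
      lambda x: [[x[1], 0], [0, 1]],
      lambda x: [[1], [x[2]]]]
A2 = [lambda x: [[x[0], 1]],
      lambda x: [[1, 0], [0, x[1]]],
      lambda x: [[x[2]], [1]]]
diff = combine([A1, A2], [1, -1])  # computes A1 - A2, width 4
```

The combined width is the sum of the individual widths, matching the $w(w+1)$ bound when $w+1$ programs of width $w$ are merged.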

Correctness.
Clearly, if equation (14) fails to hold for some $k$ and $b$, then $A \neq B$. So assume that equation (14) holds for all $k$ and $b$. Recall Lemma 2.5 and its proof: there we constructed an ROABP just from the characterizing dependencies of the given polynomial. Hence, the construction applied to $B$ will give an ROABP of width $w$ for $B$ with the same variable order $(x_1, x_2, \ldots, x_n)$ as for $A$. The matrices $D_k$ will be the same as for $A$, because their definition uses only the dependencies provided by equation (14), and these are the same as for $A$ in equation (13).
Note that when we construct the last matrix $D_n$ by equation (11), the dependencies only define the matrix $D_n'$, and the constant $A_{(y_n, a_{n,1})}$ is absorbed into it, $D_n = D_n' A_{(y_n, a_{n,1})}$. Therefore, for $B$ we will obtain $B(x) = D_1 D_2 \cdots D_n' B_{(y_n, a_{n,1})}$. Since we also check that we get the same matrix $D_n$ for $A$ and $B$, we also have $A_{(y_n, a_{n,1})} = B_{(y_n, a_{n,1})}$, and therefore $A(x) = B(x)$. This proves Theorem 3.1.

Sum of constantly many ROABPs
Our goal is to test whether A 1 + A 2 + · · · + A c = 0. Here again, the question is interesting only when the ROABPs have different variable orders. We show how to reduce the problem to the case of the equivalence of two ROABPs from the previous section. For constant c this will lead to a polynomial-time test.
We start by rephrasing the problem as an equivalence test. Let $A = -A_1$ and $B = A_2 + A_3 + \cdots + A_c$. Then the problem becomes to check whether $A = B$. Since $A$ is computed by a single ROABP, we can use the same approach as in Section 3.1. Hence, we again get the dependencies from equation (13) for $A$. Next, we have to verify these dependencies for $B$, i.e., equation (14). Now, $B$ is not given by a single ROABP, but is a sum of $c-1$ ROABPs. For every $k \in [n]$ and $b \in \operatorname{depend}_k(A)$, define the polynomial $Q = B_{(y_k,b)} - \sum_{a \in \operatorname{span}_k(A)} \gamma_a B_{(y_k,a)}$. By the definition of $B$ we have

$$Q = \sum_{j=2}^{c} \Big( {A_j}_{(y_k,b)} - \sum_{a \in \operatorname{span}_k(A)} \gamma_a\, {A_j}_{(y_k,a)} \Big). \qquad (16)$$

As explained in the previous section for equation (15), for each summand in equation (16) we can construct an ROABP of width $w(w+1)$. Thus, $Q$ can be written as a sum of $c-1$ ROABPs, each having width $w(w+1)$. To test whether $Q = 0$, we recursively use the same algorithm for the sum of $c-1$ ROABPs. The recursion ends when $c = 2$; then we directly use the algorithm from Section 3.1.
To bound the running time of the algorithm, let us see how many dependencies we need to verify. There is one dependency for every $k \in [n]$ and every $b \in \operatorname{depend}_k(A)$. Since $|\operatorname{depend}_k(A)| \le w(d+1)$, the total number of dependencies verified is at most $nw(d+1)$. Thus, we get the following recursive formula for $T(c,w)$, the time complexity for testing zeroness of the sum of $c \ge 2$ ROABPs, each having width $w$. For $c = 2$, we have $T(2,w) = \operatorname{poly}(n,d,w)$, and for $c > 2$,

$$T(c,w) = nw(d+1) \cdot T\big(c-1,\, w(w+1)\big) + \operatorname{poly}(n,d,w).$$

Unfolding the recursion, the width roughly squares at each of the $c-2$ levels, so it stays bounded by $(2w)^{2^c}$; hence $T(c,w) = \operatorname{poly}(n, d, w^{2^c})$, which is polynomial time for constant $c$.
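The double-exponential dependence on $c$ stems from the width growth $w \mapsto w(w+1)$ in this recursion; a short sketch (ours) makes it visible:

```python
def recursion_width(c, w):
    """Width of the single-ROABP instances reached by the recursion:
    each level maps w to w*(w+1), so c-2 levels roughly square the
    width c-2 times -- double exponential in c."""
    while c > 2:
        w, c = w * (w + 1), c - 1
    return w

widths = [recursion_width(c, 3) for c in (2, 3, 4, 5)]
```

Starting from width 3, the widths reached for $c = 2, 3, 4, 5$ are 3, 12, 156, 24492.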

Blackbox Identity Testing
In this section, we extend the blackbox PIT of Agrawal et al. [3] for one ROABP to the sum of constantly many ROABPs. In the blackbox model we are only allowed to evaluate a polynomial at various points. Hence, for PIT, our task is to construct a hitting-set: a set of points $H \subseteq \mathbb{F}^n$ such that every nonzero polynomial from the class under consideration evaluates to a nonzero value at some point of $H$. For polynomials computed by a sum of $c$ ROABPs, a hitting-set is defined similarly; here, $H = H(n, d, w, c)$ additionally depends on $c$.
For a hitting-set to exist, we will need enough points in the underlying field F. Henceforth, we will assume that the field F is large enough such that the constructions below go through (see [1] for constructing large F). To construct a hitting-set for a sum of ROABPs we use the concept of low support rank concentration defined by Agrawal, Saha, and Saxena [4]. A polynomial A(x) has low support concentration if the coefficients of its monomials of low support span the coefficients of all the monomials.

Definition 4.2 ([4]). A polynomial $A(x)$ over an $\mathbb{F}$-vector space is $\ell$-concentrated if
$$\dim_{\mathbb{F}} \operatorname{span}_{\mathbb{F}}\{\operatorname{coeff}_A(x^a) \mid \operatorname{supp}(a) < \ell\} = \dim_{\mathbb{F}} \operatorname{span}_{\mathbb{F}}\{\operatorname{coeff}_A(x^a) \mid a \in \mathbb{N}^n\}.$$

The above definition applies to polynomials over any $\mathbb{F}$-vector space, e.g., matrix polynomials $A(x) \in \mathbb{F}^{w\times w}[x]$.

Observe that if $A(x) \in \mathbb{F}[x]$ is a non-zero polynomial that is $\ell$-concentrated, then it has a nonzero coefficient of support $< \ell$. Then the assignments of support $< \ell$ form a hitting-set for $A(x)$ (Lemma 4.3). Hence, when we have low-support concentration, this solves blackbox PIT. However, not every polynomial has low-support concentration; for example, $A(x) = x_1 x_2 \cdots x_n$ is not $n$-concentrated. However, Agrawal, Saha, and Saxena [4] showed that low-support concentration can be achieved through an appropriate shift of the variables.

Definition 4.4. Let $A(x)$ be an $n$-variate polynomial and let $f = (f_1, f_2, \ldots, f_n)$ be an $n$-tuple of elements of $\mathbb{F}$ (or of an extension of $\mathbb{F}$, e.g., $\mathbb{F}(t)$). The polynomial $A(x + f) = A(x_1 + f_1, x_2 + f_2, \ldots, x_n + f_n)$ is called the shift of $A(x)$ by $f$.
Note that a shift is an invertible process. Therefore it preserves the coefficient space of a polynomial.
In the above example, we shift every variable by 1; that is, we consider $A(x+1) = (x_1+1)(x_2+1)\cdots(x_n+1)$. Observe that $A(x+1)$ has 1-support concentration. Agrawal, Saha, and Saxena [4] provide an efficient shift that achieves low-support concentration for polynomials computed by set-multilinear depth-3 circuits. Forbes, Saptharishi and Shpilka [9] extended their result to polynomials computed by ROABPs; however, their cost is exponential in the individual degree of the polynomial. Any efficient shift for ROABPs will suffice for our purposes. Here, we will give a new shift for ROABPs with quasi-polynomial cost. Namely, in Theorem 5.6 below we present a shift polynomial $f(t) \in \mathbb{F}[t]^n$ in one variable $t$ of degree $(ndw)^{O(\log n)}$ that can be computed in time $(ndw)^{O(\log n)}$. It has the property that for every $n$-variate polynomial $A(x) \in \mathbb{F}[x]$ of individual degree $d$ that can be computed by an ROABP of width $w$, the shifted polynomial $A(x + f(t))$ has $O(\log w)$-concentration. We can plug in as many values for $t \in \mathbb{F}$ as the degree of $f(t)$, i.e., $(ndw)^{O(\log n)}$ many. For at least one value of $t$, the shift $f(t)$ will $O(\log w)$-concentrate $A(x + f(t))$. That is, we consider $f(t)$ as a family of shifts. The same shift also works when the ROABP computes a matrix polynomial in $\mathbb{F}^{w\times w}[x]$. The rest of the paper is organized as follows: the construction of a shift to obtain low-support concentration for single ROABPs is postponed to Section 5; we start in Section 4.1 by showing how the shift for a single ROABP can be applied to obtain a shift for the sum of constantly many ROABPs.
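The shift-by-1 example can be checked directly. The sketch below (our own helper, using the binomial expansion $\operatorname{coeff}_{A(x+1)}(x^b) = \sum_{a \ge b} \prod_i \binom{a_i}{b_i} \operatorname{coeff}_A(x^a)$) shifts every variable of $A = x_1 x_2 x_3$ by 1 and shows that the constant coefficient, of support 0, becomes nonzero — so for scalar polynomials the shifted polynomial is 1-concentrated.

```python
from itertools import product as cartesian
from math import comb, prod

def shift_by_one(poly, n, d):
    """Coefficient dict of A(x + 1) from that of A(x):
    coeff'(b) = sum_a prod_i C(a_i, b_i) * coeff(a)."""
    shifted = {}
    for b in cartesian(range(d + 1), repeat=n):
        s = sum(c * prod(comb(a[i], b[i]) for i in range(n))
                for a, c in poly.items())
        if s:
            shifted[b] = s
    return shifted

A = {(1, 1, 1): 1}          # A = x1*x2*x3: its only coefficient has support 3
A1 = shift_by_one(A, 3, 1)  # (x1+1)(x2+1)(x3+1): all 8 coefficients equal 1
```

The shift destroys no information: the map between coefficient vectors is invertible, which is the "shifting preserves the coefficient space" fact used throughout this section.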

Sum of ROABPs
Let polynomial A ∈ F[x] of individual degree d have an ROABP of width w, with variable order (x 1 , x 2 , . . . , x n ). Let B ∈ F[x] be another polynomial. We start by reconsidering the whitebox test from the previous section. The dependency equations (13) and (14) were used to construct an ROABP for B ∈ F[x] in the same variable order as for A, and the same width. If this succeeds, then the polynomial A + B has one ROABP of width 2w. Since there is already a blackbox PIT for one ROABP [3], we are done in this case. Hence, the interesting case that remains is when B does not have an ROABP of width w in the variable order of A.
Let $k \in [n]$ be the first index such that the dependency equations (13) for $A$ do not carry over to $B$ as in equation (14). In the following Lemma 4.5 we decompose $A$ and $B$ into a common part up to layer $k$, and the remaining different parts. That is, for $y_k = (x_1, x_2, \ldots, x_k)$ and $z_k = (x_{k+1}, \ldots, x_n)$, we obtain $A = RP$ and $B = RQ$, where $R \in \mathbb{F}[y_k]^{1\times w'}$ and $P, Q \in \mathbb{F}[z_k]^{w'\times 1}$, for some $w' \le w(d+1)$. The construction also implies that the coefficient space of $R$ has full rank $w'$. Since the dependency equations (13) for $A$ do not fulfill equation (14) for $B$, we get a constant vector $\Gamma \in \mathbb{F}^{1\times w'}$ such that $\Gamma P = 0$ but $\Gamma Q \neq 0$. From these properties we will see in Lemma 4.6 below that we get low-support concentration for $A + B$ when we use the shift constructed in Section 5 for one ROABP.

Proof (of Lemma 4.5). To keep the notation simple, we assume that $\operatorname{span}_k(A) = \{a_{k,1}, a_{k,2}, \ldots, a_{k,w}\}$ has size $w$ for each $1 \le k \le n-1$, and $\operatorname{span}_n(A) = \{a_{n,1}\}$. In the proof of Lemma 2.5 we consider the dependency equations for $A$ and carry them over to $B$. By the assumption of the lemma, there is no ROABP of width $w$ for $B$ now.
Therefore there is a smallest $k \in [n]$ where a dependency for $A$ is not followed by $B$. That is, the coefficients $\gamma_a$ computed for equation (13) do not fulfill equation (14) for $B$. Since the dependencies carry over up to this point, the construction of the matrices $D_1, D_2, \ldots, D_{k-1}$ works out fine for $B$. Hence, by equation (4), we can write

$$A(x) = D_1 D_2 \cdots D_{k-1} \left[ A_{(y_{k-1}, a_{k-1,1})} \; \cdots \; A_{(y_{k-1}, a_{k-1,w})} \right]^T. \qquad (17)$$

Since the difference between $A$ and $B$ occurs at $x_k$, we consider all possible extensions from $x_{k-1}$. That is, by equation (9), for every $i \in [w]$ we have

$$A_{(y_{k-1}, a_{k-1,i})} = \sum_{j=0}^{d} x_k^j \, A_{(y_k, (a_{k-1,i}, j))}. \qquad (18)$$

Recall that our goal is to decompose polynomial $A$ into $A = RP$. We first define polynomial $P$ as the vector of coefficient polynomials of all the one-step extensions of $\operatorname{span}_{k-1}(A)$, i.e., $P = \big( A_{(y_k, (a_{k-1,i},\, j))} \big)_{1 \le i \le w,\ 0 \le j \le d}$, which is of length $w' = w(d+1)$. By equation (18) we get

$$\left[ A_{(y_{k-1}, a_{k-1,1})} \; \cdots \; A_{(y_{k-1}, a_{k-1,w})} \right]^T = E_k P, \qquad (19)$$

where $E_k \in \mathbb{F}[x_k]^{w \times w'}$ is the matrix with $E_k(i, (i,j)) = x_k^j$ for $0 \le j \le d$, and zero at all other positions. Thus, equation (17) can be written as $A(x) = D_1 D_2 \cdots D_{k-1} E_k P$. Hence, when we define $R(y_k) = D_1 D_2 \cdots D_{k-1} E_k$, then we have $A = RP$ as desired. By an analogous argument we get $B = RQ$ for $Q = \big( B_{(y_k, (a_{k-1,i},\, j))} \big)_{1 \le i \le w,\ 0 \le j \le d}$.
For the second claim of the lemma, let $b \in \operatorname{depend}_k(A)$ be such that the dependency equation (13) for $A$ is fulfilled, but not equation (14) for $B$. Define $\Gamma \in \mathbb{F}^{1\times w'}$ to be the vector that has the value $1$ at the position where $P$ has entry $A_{(y_k,b)}$, the value $-\gamma_a$ at the position where $P$ has entry $A_{(y_k,a)}$, for each $a \in \operatorname{span}_k(A)$, and zero at all other positions. Then $\operatorname{supp}(\Gamma) \le w+1$ and we have $\Gamma P = 0$ and $\Gamma Q \neq 0$.
It remains to show that the coefficient space of $R$ has full rank. By Corollary 2.6, the coefficient space of $D_1 D_2 \cdots D_{k-1}$ has full rank $w$: for any $\ell \in [w]$, the coefficient of the monomial $y_{k-1}^{a_{k-1,\ell}}$ is $e_\ell$, the $\ell$-th standard unit vector. Therefore, the coefficient of $y_k^{(a_{k-1,\ell},\, j)}$ in $R$ is $e_\ell \cdot \operatorname{coeff}_{E_k}(x_k^j)$, for $1 \le \ell \le w$ and $0 \le j \le d$. By the definition of $E_k$, we get $\operatorname{coeff}_R\big(y_k^{(a_{k-1,\ell},\, j)}\big) = e_{(\ell-1)(d+1)+j+1}$. Thus, the coefficient space of $R$ has full rank $w'$.

Lemma 4.5 provides the technical tool to obtain low-support concentration for the sum of several ROABPs by the shift developed for a single ROABP. We start with the case of the sum of two ROABPs.

Lemma 4.6. Let $A, B \in \mathbb{F}[x]$ be polynomials of individual degree $d$, each computed by an ROABP of width $w$ (possibly in different variable orders). Let $f_{w,2}(t) \in \mathbb{F}[t]^n$ be a shift that $\ell_{w,2}$-concentrates any polynomial (or matrix polynomial) that is computed by an ROABP of width $W_{w,2}$. Then $(A+B)(x + f_{w,2}(t))$ is $2\ell_{w,2}$-concentrated.

Proof. If $B$ can be computed by an ROABP of width $w$ in the same variable order as the one for $A$, then there is an ROABP of width $2w$ that computes $A+B$. In this case the lemma follows because $2w \le W_{w,2}$. So let us assume that there is no such ROABP for $B$. Thus the assumption of Lemma 4.5 is fulfilled. Hence, we have a decomposition of $A$ and $B$ at the $k$-th layer into $A(x) = R(y_k) P(z_k)$ and $B(x) = R(y_k) Q(z_k)$, and there is a vector $\Gamma \in \mathbb{F}^{1\times w'}$ such that $\Gamma P = 0$ and $\Gamma Q \neq 0$, where $w' = (d+1)w$ and $\operatorname{supp}(\Gamma) \le w+1$. Define $R', P', Q'$ as the polynomials $R, P, Q$ shifted by $f_{w,2}$, respectively. Since $\Gamma P = 0$, we also have $\Gamma P' = 0$.
By the definition of $R$, there is an ROABP of width $w'$ that computes $R$. Since $w' \le W_{w,2}$, polynomial $R'$ is $\ell_{w,2}$-concentrated by the assumption of the lemma.
We argue that also $\Gamma Q'$ is $\ell_{w,2}$-concentrated. We have $\Gamma Q = \sum_{i=1}^{w'} \Gamma_i Q_i$. By Lemma 2.3, from the ROABP for $B$ we get an ROABP for each $Q_i$ of the same width $w$ and the same variable order. Since $\operatorname{supp}(\Gamma) \le w+1$, we can combine them into one ROABP of width $w(w+1) \le W_{w,2}$ that computes $\Gamma Q$. Hence, $\Gamma Q'$ is $\ell_{w,2}$-concentrated by the assumption of the lemma. Since $\Gamma Q \neq 0$, also $\Gamma Q' \neq 0$, and therefore there exists at least one exponent $b$ with $\operatorname{supp}(b) < \ell_{w,2}$ such that

$$\operatorname{coeff}_{\Gamma Q'}(z_k^b) \neq 0. \qquad (20)$$

Recall that the coefficient space of $R$ has full rank $w'$. Since a shift preserves the coefficient space, also $R'$ has a full-rank coefficient space. Because $R'$ is $\ell_{w,2}$-concentrated, already the coefficients of the monomials of support $< \ell_{w,2}$ of $R'$ have full rank $w'$. That is, for $M_{w,2} = \{a \in \{0,1,\ldots,d\}^k \mid \operatorname{supp}(a) < \ell_{w,2}\}$, we have $\operatorname{rank}_{\mathbb{F}(t)}\{\operatorname{coeff}_{R'}(y_k^a) \mid a \in M_{w,2}\} = w'$. Therefore, we can express $\Gamma$ as a linear combination of these coefficients, $\Gamma = \sum_{a \in M_{w,2}} \alpha_a \operatorname{coeff}_{R'}(y_k^a)$, where $\alpha_a$ is a rational function in $\mathbb{F}(t)$, for $a \in M_{w,2}$. Hence, from equation (20) we get

$$0 \neq \operatorname{coeff}_{\Gamma Q'}(z_k^b) = \sum_{a \in M_{w,2}} \alpha_a \operatorname{coeff}_{(A+B)'}\big(y_k^a z_k^b\big),$$

where $(A+B)' = (A+B)(x + f_{w,2}(t)) = R'(P' + Q')$, using $\Gamma P' = 0$. Since $\operatorname{supp}(a,b) = \operatorname{supp}(a) + \operatorname{supp}(b) < 2\ell_{w,2}$, it follows that there is a monomial in $(A+B)'$ of support $< 2\ell_{w,2}$ with a nonzero coefficient. In other words, $(A+B)'$ is $2\ell_{w,2}$-concentrated.
In Section 5, Theorem 5.6, we will show that the shift polynomial $f_{w,2}(t) \in \mathbb{F}[t]^n$ used in Lemma 4.6 can be computed in time $(ndw)^{O(\log n)}$; the degree of $f_{w,2}(t)$ has the same bound. Recall that when we say that we shift by $f_{w,2}(t)$, we actually mean that we plug in values for $t$ up to the degree of $f_{w,2}(t)$. That is, we have a family of $(ndw)^{O(\log n)}$ shifts, and at least one of them will give low-support concentration. By Lemma 4.3, we get for each $t$ a potential hitting-set $H_t$ of size $(nd)^{O(\ell_{w,2})} = (nd)^{O(\log dw)}$. The final hitting-set is the union of all these sets, i.e., $H = \bigcup_t H_t$, where $t$ takes $(ndw)^{O(\log n)}$ distinct values. Hence, we have the following main result.

Theorem 4.7. There is a blackbox PIT for the sum of two ROABPs of width $w$ and individual degree $d$ that works in time $(ndw)^{O(\log(ndw))}$.

We extend Lemma 4.6 to the sum of $c$ ROABPs.

Lemma 4.8. Let $A_1, A_2, \ldots, A_c \in \mathbb{F}[x]$ be polynomials of individual degree $d$, each computed by an ROABP of width $w$. Let $f_{w,c}(t) \in \mathbb{F}[t]^n$ be a shift that $\ell_{w,c}$-concentrates any polynomial (or matrix polynomial) that is computed by an ROABP of width $W_{w,c}$. Then $(A_1 + A_2 + \cdots + A_c)(x + f_{w,c}(t))$ is $c\,\ell_{w,c}$-concentrated.

Proof. The proof is by induction on $c$. Lemma 4.6 provides the base case $c = 2$. For the induction step, let $c \ge 3$. We follow the proof of Lemma 4.6 with $A = A_1$ and $B = \sum_{j=2}^c A_j$. Consider again the decomposition of $A$ and $B$ at the $k$-th layer into $A = RP$ and $B = RQ$, and let $\Gamma \in \mathbb{F}^{1\times w'}$ be such that $\Gamma P = 0$ and $\Gamma Q \neq 0$, where $w' = (d+1)w$ and $\operatorname{supp}(\Gamma) \le w+1$.
The only difference to the proof of Lemma 4.6 is in the treatment of $\Gamma Q$, where $Q = [Q_1\; Q_2\; \cdots\; Q_{w'}]^T$. Recall from Lemma 4.5 that $Q_i = B_{(y_k,a_i)} = \sum_{j=2}^c {A_j}_{(y_k,a_i)}$, for $a_i \in \operatorname{depend}_k(A)$. Hence, $\Gamma Q = \sum_{j=2}^c \sum_{i=1}^{w'} \Gamma_i\, {A_j}_{(y_k,a_i)}$. By Lemma 2.3, $\Gamma Q$ can be computed by a sum of $c-1$ ROABPs, each of width $w(w+1) \le 2w^2 =: \hat{w}$, because $\operatorname{supp}(\Gamma) \le w+1$. Our definition of $W_{w,c}$ was chosen such that $W_{w,c} = W_{\hat{w},\,c-1}$ and $\ell_{w,c} = \ell_{\hat{w},\,c-1}$. Hence, $f_{w,c}(t)$ is a shift that $\ell_{\hat{w},\,c-1}$-concentrates any polynomial that is computed by an ROABP of width $W_{\hat{w},\,c-1}$. By the induction hypothesis, we get that $\Gamma Q' = \Gamma Q(x + f_{w,c}(t))$ is $(c-1)\ell_{\hat{w},\,c-1}$-concentrated, which is the same as $(c-1)\ell_{w,c}$-concentrated. Now we can proceed as in the proof of Lemma 4.6 and get that $(A+B)' = \big(\sum_{j=1}^c A_j\big)(x + f_{w,c}(t))$ has a monomial of support $< \ell_{w,c} + (c-1)\ell_{w,c} = c\,\ell_{w,c}$ with a nonzero coefficient.
We combine the lemmas similarly as for Theorem 4.7 and obtain our main result for the sum of constantly many ROABPs (Theorem 4.9): there is a blackbox PIT for the sum of $c$ ROABPs of width $w$ and individual degree $d$ that works in time $(ndw)^{O(c\,2^c \log(ndw))}$.

Concentration in matrix polynomials
As a by-product, we show that low-support concentration can be achieved even when we have a sum of matrix polynomials, each computed by an ROABP. For a matrix polynomial $A(x) \in \mathbb{F}^{w\times w}[x]$, an ROABP is defined similarly to the standard case: we have layers of nodes $V_0, V_1, \ldots, V_n$ connected by directed edges from $V_{i-1}$ to $V_i$. Here, both $V_0 = \{v_{0,1}, v_{0,2}, \ldots, v_{0,w}\}$ and $V_n = \{v_{n,1}, v_{n,2}, \ldots, v_{n,w}\}$ consist of $w$ nodes. The polynomial $A_{i,j}(x)$ at position $(i,j)$ in $A(x)$ is the polynomial computed by the standard ROABP with start node $v_{0,i}$ and end node $v_{n,j}$.
Note that Definition 4.2 for -support concentration can be applied to polynomials over any F-algebra.

Corollary 4.10. Let $A = A_1 + A_2 + \cdots + A_c$, where each $A_i \in \mathbb{F}^{w\times w}[x]$ is an $n$-variate matrix polynomial of individual degree $d$, computed by an ROABP of width $w$. Let $\ell_{w,c}$ be defined as in Lemma 4.8. Then $A(x + f_{w^2,c}(t))$ is $c\,\ell_{w^2,c}$-concentrated.

Proof. Let $\alpha \in \mathbb{F}^{w\times w}$ and consider the inner product $\langle \alpha, A_i \rangle$. This polynomial can be computed by an ROABP of width $w^2$: we take the ROABP of width $w$ for $A_i$, make $w$ copies of it, and add two new nodes $s$ and $t$. We add the following edges. Connect the new start node $s$ to the $h$-th former start node of the $h$-th copy of the ROABP by edges of weight one, for all $1 \le h \le w$. Connect the $j$-th former end node of the $h$-th copy of the ROABP to the new end node $t$ by an edge of weight $\alpha_{h,j}$, for all $1 \le h, j \le w$. The resulting ROABP has width $w^2$ and computes $\langle \alpha, A_i \rangle$.

Now consider the polynomial $\langle \alpha, A \rangle = \langle \alpha, A_1 \rangle + \langle \alpha, A_2 \rangle + \cdots + \langle \alpha, A_c \rangle$. It can be computed by a sum of $c$ ROABPs, each of width $w^2$, for every $\alpha \in \mathbb{F}^{w\times w}$. Hence, by Lemma 4.8, the polynomial $\langle \alpha, A \rangle(x + f_{w^2,c}(t))$ is $c\,\ell_{w^2,c}$-concentrated, for every $\alpha \in \mathbb{F}^{w\times w}$. By Lemma 4.11 below, it follows that $A(x + f_{w^2,c}(t))$ is $c\,\ell_{w^2,c}$-concentrated.
The following lemma is also of independent interest.

Lemma 4.11. Let $A \in \mathbb{F}^{w \times w}[x]$ be an $n$-variate polynomial and $f(t)$ be a shift. Then $A(x + f(t))$ is $\ell$-concentrated if and only if $\langle \alpha, A \rangle(x + f(t))$ is $\ell$-concentrated for every $\alpha \in \mathbb{F}^{w \times w}$.

Proof. Write $A' = A(x + f(t))$. Suppose $A'$ is not $\ell$-concentrated, i.e., some coefficient of $A'$ lies outside the span of the coefficients of support $< \ell$. Hence, there exists an $\alpha \in \mathbb{F}^{w \times w}$ such that $\langle \alpha, \operatorname{coeff}_{A'}(x^a) \rangle = 0$, for all $a$ with $\operatorname{supp}(a) < \ell$, but $\langle \alpha, A' \rangle \neq 0$. We thus found an $\alpha \in \mathbb{F}^{w \times w}$ such that $\langle \alpha, A \rangle(x + f(t))$ is not $\ell$-concentrated.

For the other direction, let $A(x + f)$ be $\ell$-concentrated. Hence, any coefficient $\operatorname{coeff}_{A'}(x^a)$ can be written as a linear combination of the small-support coefficients, $\operatorname{coeff}_{A'}(x^a) = \sum_{\operatorname{supp}(b) < \ell} \gamma_b\, \operatorname{coeff}_{A'}(x^b)$, for some $\gamma_b \in \mathbb{F}$. Hence, for any $\alpha \in \mathbb{F}^{w \times w}$, we also have $\langle \alpha, \operatorname{coeff}_{A'}(x^a) \rangle = \sum_{\operatorname{supp}(b) < \ell} \gamma_b\, \langle \alpha, \operatorname{coeff}_{A'}(x^b) \rangle$. That is, $\langle \alpha, A \rangle(x + f(t))$ is $\ell$-concentrated.

Low Support Concentration in ROABPs
Recall that a polynomial A(x) over an F-algebra A is called low-support concentrated if its low-support coefficients span all its coefficients. We show an efficient shift which achieves concentration in matrix polynomials computed by ROABPs. We use the quasi-polynomial size hitting-set for ROABPs given by Agrawal et al. [3]. Their hitting-set is based on a basis isolating weight assignment which we define next.
Recall that for a weight function $w : [n] \to \mathbb{N}$ and $a = (a_1, a_2, \ldots, a_n) \in M$, the weight of $a$ is defined as $w(a) = \sum_{i=1}^{n} w(i)\, a_i$. Let $\mathbb{A}_k$ be a $k$-dimensional algebra over the field $\mathbb{F}$.

Definition 5.1. A weight function $w : [n] \to \mathbb{N}$ is called a basis isolating weight assignment for a polynomial $A(x) \in \mathbb{A}_k[x]$ if there exists a set $S \subseteq M$ of size $\le k$ such that the elements of $S$ have distinct weights, and for every $a \in M \setminus S$,
$$\operatorname{coeff}_A(x^a) \;\in\; \operatorname{span}\{\, \operatorname{coeff}_A(x^b) \mid b \in S,\; w(b) < w(a) \,\}.$$
Agrawal et al. [3, Lemma 8] presented a quasi-polynomial-time construction of such a weight function for any polynomial $A(x) \in \mathbb{F}^{w \times w}[x]$ computed by an ROABP. The hitting-set is then defined by the points $(t^{w(1)}, t^{w(2)}, \ldots, t^{w(n)})$ for $\operatorname{poly}(n, d, w)^{\log n}$ many values of $t$. Our approach now is to use this weight function for a shift of $A(x)$ by $(t^{w(i)})_{i=1}^{n}$. Let $A'(x) = A(x_1 + t^{w(1)}, x_2 + t^{w(2)}, \ldots, x_n + t^{w(n)})$ denote the shifted polynomial. We will prove that $A'$ has low-support concentration.
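To illustrate how a weight assignment is used as a hitting set, here is a toy sketch; the polynomial and the weights below are our own, not the construction of [3]:

```python
# Substituting x_i -> t^{w(i)} maps a monomial x^a to t^{w(a)}; if the
# relevant monomials receive distinct weights, non-zeroness survives.
import sympy as sp

t = sp.symbols('t')
x1, x2, x3 = sp.symbols('x1 x2 x3')

poly = x1*x2 - x2*x3 + x1*x3      # hypothetical sparse polynomial
w = {x1: 1, x2: 2, x3: 4}         # hypothetical weight function w : [n] -> N

# The hitting-set point (t^{w(1)}, t^{w(2)}, t^{w(3)}):
uni = sp.expand(poly.subs({xi: t**wi for xi, wi in w.items()},
                          simultaneous=True))
# Monomial weights: 1+2 = 3, 2+4 = 6, 1+4 = 5 -- all distinct, so no
# cancellation can occur and uni is a nonzero univariate polynomial.
```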
The coefficients of $A'$ are linear combinations of the coefficients of $A$, given by
$$\operatorname{coeff}_{A'}(x^a) \;=\; \sum_{b \in M} \binom{b}{a}\, t^{w(b) - w(a)}\, \operatorname{coeff}_A(x^b), \qquad (21)$$
where $\binom{b}{a} = \prod_{i=1}^{n} \binom{b_i}{a_i}$, for any $a, b \in \mathbb{N}^n$. In matrix form, equation (21) becomes
$$C' \;=\; D^{-1}\, T\, D\, C, \qquad (22)$$
where $C$ and $C'$ are the coefficient matrices of $A$ and $A'$, respectively, $T$ is the $M \times M$ matrix with $T(a, b) = \binom{b}{a}$, and $D$ is the diagonal matrix with $D(a, a) = t^{w(a)}$. The inverse of $D$ is the diagonal matrix given by $D^{-1}(a, a) = t^{-w(a)}$. As shifting is an invertible operation, the matrix $T$ is also invertible and $\operatorname{rank}(C') = \operatorname{rank}(C)$.

Lemma 5.2 (Isolation to concentration). Let $A(x)$ be a polynomial over a $k$-dimensional algebra $\mathbb{A}_k$ and let $w$ be a basis isolating weight assignment for $A(x)$. Then $A'(x) = A(x + (t^{w(1)}, t^{w(2)}, \ldots, t^{w(n)}))$ is $\ell$-concentrated over $\mathbb{F}(t)$, for $\ell = \lceil \log(k+1) \rceil$.
Proof. We reconsider equation (22), $C' = D^{-1} T D\, C$. Let $M_\ell = \{a \in M \mid \operatorname{supp}(a) < \ell\}$, and let a subscript $\ell$ denote the restriction of a matrix to the rows indexed by $M_\ell$. To show that $A'$ is $\ell$-concentrated, we need to prove that $\operatorname{rank}(C'_\ell) = \operatorname{rank}(C)$. By equation (22), matrix $C'_\ell$ can be written as $C'_\ell = D_\ell^{-1}\, T_\ell\, D\, C$. Since $D$ and $D_\ell^{-1}$ are diagonal matrices, they have full rank. Hence, it suffices to show that $\operatorname{rank}(T_\ell\, D\, C) = \operatorname{rank}(C)$.
W.l.o.g. we assume that the order of the rows and columns in all the above matrices that are indexed by $M$ or $M_\ell$ is according to increasing weight $w(a)$ of the indices $a$. The rows with the same weight can be arranged in an arbitrary order. Now, recall that $w$ is a basis isolating weight assignment. Hence, there exists a set $S = \{s_1, s_2, \ldots, s_{k'}\} \subseteq M$ such that the coefficients $\operatorname{coeff}_A(x^b)$, for $b \in S$, span all coefficients $\operatorname{coeff}_A(x^a)$, for $a \in M$. In terms of the coefficient matrix $C$, for any $a \in M \setminus S$ we can write
$$C(a, \cdot) \;\in\; \operatorname{span}\{\, C(b, \cdot) \mid b \in S,\; w(b) < w(a) \,\}. \qquad (23)$$
Let $C_0$ be the $[k'] \times [k]$ matrix whose $j$-th row is $C(s_j, \cdot)$. By (23), for every $a \in M$, there is a vector $\gamma_a = (\gamma_{a,1}, \gamma_{a,2}, \ldots, \gamma_{a,k'}) \in \mathbb{F}^{k'}$ such that $C(a, \cdot) = \sum_{j=1}^{k'} \gamma_{a,j}\, C_0(j, \cdot)$. Let $\Gamma = (\gamma_{a,j})_{a,j}$ be the $M \times [k']$ matrix with these vectors as rows. Then we get $C = \Gamma\, C_0$. Observe that the $s_i$-th row of $\Gamma$ is simply $e_i$, the $i$-th standard unit vector. By (23), the coefficient $C(s_i, \cdot)$ is used to express $C(a, \cdot)$ only when $w(a) > w(s_i)$. Recall that the rows of the matrices indexed by $M$, like $\Gamma$, are in order of increasing weight of the index. Therefore, when we consider the $i$-th column of $\Gamma$ from the top, the entries are all zero down to row $s_i$, where we hit the one from $e_i$:
$$\Gamma(s_i, i) = 1 \quad \text{and} \quad \forall\, a \neq s_i,\; w(a) \le w(s_i) \implies \Gamma(a, i) = 0. \qquad (24)$$
Recall that our goal is to show $\operatorname{rank}(T_\ell\, D\, C) = \operatorname{rank}(C)$. For this, it suffices to show that the $M_\ell \times [k']$ matrix $R = T_\ell\, D\, \Gamma$ has full column rank $k'$, because then we have $\operatorname{rank}(T_\ell\, D\, C) = \operatorname{rank}(T_\ell\, D\, \Gamma\, C_0) = \operatorname{rank}(R\, C_0) = \operatorname{rank}(C_0) = \operatorname{rank}(C)$.
To show that $R$ has full column rank $k'$, observe that the $j$-th column of $R$ can be written as
$$R(\cdot, j) \;=\; \sum_{a \in M} \Gamma(a, j)\, t^{w(a)}\, T_\ell(\cdot, a). \qquad (25)$$
By (24), the term with the lowest degree in equation (25) is $t^{w(s_j)}$. By $\operatorname{lc}(R(\cdot, j))$ we denote the coefficient of the lowest-degree term in the polynomial $R(\cdot, j)$. Because $\Gamma(s_j, j) = 1$, we have $\operatorname{lc}(R(\cdot, j)) = T_\ell(\cdot, s_j)$.
We define the $M_\ell \times [k']$ matrix $R_0$ whose $j$-th column is $\operatorname{lc}(R(\cdot, j))$, i.e., $R_0(\cdot, j) = T_\ell(\cdot, s_j)$. We will show in Lemma 5.3 below that the columns of matrix $T_\ell$ indexed by the set $S$ are linearly independent. Therefore the $k'$ columns of $R_0$ are linearly independent. Hence, there are $k'$ rows in $R_0$ such that its restriction to these rows, say $R'_0$, is a square matrix with nonzero determinant. Let $R'$ denote the restriction of $R$ to the same set of rows. Now observe that the lowest-degree term in $\det(R')$ has coefficient precisely $\det(R'_0)$, i.e., $\operatorname{lc}(\det(R')) = \det(R'_0)$. This is because the lowest-degree term in $\det(R')$ has degree $\sum_{j=1}^{k'} w(s_j)$, and this degree can only be obtained when the degree-$w(s_j)$ term is taken from the $j$-th column, for all $j$. We conclude that $\det(R') \neq 0$ and hence $R$ has full column rank.
It remains to show that the $k' \le k$ columns of matrix $T_\ell$ indexed by the set $S$ are linearly independent. In fact, we will show that any $2^\ell - 1$ columns of $T_\ell$ are independent.

Lemma 5.3. Any $2^\ell - 1$ columns of the matrix $T_\ell$ are linearly independent.

Proof. Let $S \subseteq M$ now be any set of size $2^\ell - 1$, and let $T_{\ell, S}$ be the $M_\ell \times S$ submatrix of $T_\ell$ that consists of the columns indexed by $S$. To prove the lemma, we will show that $T_{\ell, S}\, v \neq 0$ for every nonzero vector $v = (v_b)_{b \in S}$. Consider the polynomial $V(x) = \sum_{b \in S} v_b\, x^b$, which is nonzero and has sparsity at most $2^\ell - 1$. By the definition of $T$ in equation (21), for any $a \in M_\ell$ we get
$$(T_{\ell, S}\, v)(a) \;=\; \sum_{b \in S} \binom{b}{a}\, v_b \;=\; \operatorname{coeff}_{V'}(x^a), \quad \text{where } V'(x) = V(x + 1).$$
Hence, $T_{\ell, S}\, v$ gives all the coefficients of $V'(x)$ of support $< \ell$. Now it remains to show that at least one of these coefficients is nonzero. We show this in our next claim about concentration in sparse polynomials, which is also of independent interest.
Lemma 5.4. Let $V(x) \in \mathbb{F}[x]$ be a non-zero $n$-variate polynomial with sparsity bounded by $2^\ell - 1$. Then $V'(x) = V(x + 1)$ has a nonzero coefficient of support $< \ell$.
Proof. We prove the claim by induction on the number of variables $n$. For $n = 1$, the polynomial $V(x)$ is univariate, i.e., all monomials in $V(x)$ have support $\le 1$. Hence, for $\ell > 1$ it suffices to show that $V'(x) \neq 0$. But this is equivalent to $V(x) \neq 0$, which holds by assumption. If $\ell = 1$, then $V(x)$ is a univariate polynomial with exactly one monomial, and therefore $V'(x) = V(x + 1)$ has a nonzero constant part. Now assume that the claim is true for $n - 1$ and let $V(x)$ have $n$ variables. Let $x_{n-1}$ denote the set of the first $n - 1$ variables. Let us write $V(x) = \sum_{i=0}^{d} U_i(x_{n-1})\, x_n^i$ and let $U'_i(x_{n-1}) = U_i(x_{n-1} + 1)$ be the shifted polynomial, for every $0 \le i \le d$. We consider two cases.

Case 1: There is exactly one index $i \in [0, d]$ for which $U_i \neq 0$. Then $U_i$ has sparsity $\le 2^\ell - 1$. Because $U_i$ is an $(n-1)$-variate polynomial, $U'_i$ has a nonzero coefficient of support $< \ell$ by the inductive hypothesis.
Thus, $V'(x) = (x_n + 1)^i\, U'_i$ also has a nonzero coefficient of support $< \ell$.

Case 2: There are at least two $U_i$'s which are nonzero. Then there is at least one index $i \in [0, d]$ such that $U_i$ has sparsity $\le 2^{\ell-1} - 1$. Hence, by the inductive hypothesis, $U'_i$ has a nonzero coefficient of support $< \ell - 1$. Consider the largest index $j$ such that $U'_j$ has a nonzero coefficient of support $< \ell - 1$, and let the corresponding monomial be $x_{n-1}^a$. Now,
$$\operatorname{coeff}_{V'}(x_{n-1}^a\, x_n^j) \;=\; \sum_{r \ge j} \binom{r}{j}\, \operatorname{coeff}_{U'_r}(x_{n-1}^a).$$
By our choice of $j$ we have $\operatorname{coeff}_{U'_j}(x_{n-1}^a) \neq 0$ and $\operatorname{coeff}_{U'_r}(x_{n-1}^a) = 0$, for $r > j$. Hence, $\operatorname{coeff}_{V'}(x_{n-1}^a\, x_n^j) \neq 0$. The monomial $x_{n-1}^a\, x_n^j$ has support $< \ell$, which proves our claim and the lemma.

By Lemma 5.2, we now have an alternative PIT for a single ROABP, because we could simply try all $f_i \in \mathcal{F}$ for low-support concentration, and we know that at least one will work. However, in Lemmas 4.6 and 4.8 we apply the shift to several ROABPs simultaneously, and we have no guarantee that one of the shifts works for all of them. We solve this problem by combining the $n$-tuples in $\mathcal{F}$ into one single shift that works for every ROABP.
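The claim about sparse polynomials can be checked mechanically on small instances. The following sketch (our helper name, not the paper's) shifts every variable by 1 and measures the smallest support of a surviving monomial:

```python
import sympy as sp

def min_support_after_shift(V, xs):
    """Smallest support (number of variables with positive exponent) among
    the monomials of V(x + 1) that have a nonzero coefficient."""
    shifted = sp.expand(V.subs({xi: xi + 1 for xi in xs}, simultaneous=True))
    return min(sum(1 for e in m if e > 0)
               for m in sp.Poly(shifted, *xs).monoms())

xs = sp.symbols('x1 x2 x3')
x1, x2, x3 = xs

# Sparsity 3 <= 2^2 - 1, so the claim promises a nonzero coefficient of
# support < 2 after the shift.
V = x1*x2*x3 + x1*x2 + x2*x3
low = min_support_after_shift(V, xs)
```

Here the shift already produces a nonzero constant term, since $V(1,1,1) = 3 \neq 0$.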
Let $L(y, t) \in \mathbb{F}[y, t]^n$ be the Lagrange interpolation of $\mathcal{F} = \{f_1, f_2, \ldots, f_N\}$. That is, for all $j \in [n]$,
$$L_j(y, t) \;=\; \sum_{i=1}^{N} f_{i,j}(t) \prod_{\substack{i' = 1 \\ i' \neq i}}^{N} \frac{y - \alpha_{i'}}{\alpha_i - \alpha_{i'}}\,,$$
where $\alpha_i$ is an arbitrary unique field element associated with $i$, for all $i \in [N]$. (Recall that we assume that the field $\mathbb{F}$ is large enough that these elements exist.) Note that $L_j|_{y=\alpha_i} = f_{i,j}$. Thus, $L|_{y=\alpha_i} = f_i$. Also, $\deg_y(L_j) = N - 1$ and $\deg_t(L_j) \le D$, where $D$ bounds the $t$-degrees of the tuples in $\mathcal{F}$.
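A small sketch of this interpolation (the family $\mathcal{F}$ and the $\alpha_i$ below are hypothetical):

```python
import sympy as sp

y, t = sp.symbols('y t')

# Hypothetical family F of N = 3 tuples, each an n-tuple (n = 2) over F[t].
F = [[t, t**2], [t**3, t], [t**2, t**4]]
alphas = [1, 2, 3]   # distinct field elements, one per tuple
N = len(F)

def lagrange_tuple(F, alphas):
    """L_j(y,t) = sum_i f_{i,j}(t) * prod_{i' != i} (y - a_{i'})/(a_i - a_{i'})."""
    n = len(F[0])
    L = []
    for j in range(n):
        Lj = sum(F[i][j] * sp.prod([(y - alphas[k]) / (alphas[i] - alphas[k])
                                    for k in range(N) if k != i])
                 for i in range(N))
        L.append(sp.expand(Lj))
    return L

L = lagrange_tuple(F, alphas)
# Evaluating L at y = alphas[i] recovers the i-th tuple f_i,
# and deg_y(L_j) = N - 1 = 2.
```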
Lemma 5.5. Let $A(x)$ be an $n$-variate polynomial over a $k$-dimensional $\mathbb{F}$-algebra $\mathbb{A}_k$ and let $\mathcal{F}$ be a family of $n$-tuples such that there exists an $f \in \mathcal{F}$ for which $A'(x, t) = A(x + f) \in \mathbb{A}_k(t)[x]$ is $\ell$-concentrated. Then $A''(x, y, t) = A(x + L) \in \mathbb{A}_k(y, t)[x]$ is $\ell$-concentrated, where $L$ is the Lagrange interpolation of $\mathcal{F}$.
Proof. Let $\operatorname{rank}_{\mathbb{F}}\{\operatorname{coeff}_{A}(x^a) \mid a \in M\} = k'$, for some $k' \le k$, and let $M_\ell = \{a \in M \mid \operatorname{supp}(a) < \ell\}$. We need to show that $\operatorname{rank}_{\mathbb{F}(y,t)}\{\operatorname{coeff}_{A''}(x^a) \mid a \in M_\ell\} = k'$.
Since $A'(x)$ is $\ell$-concentrated, we have that $\operatorname{rank}_{\mathbb{F}(t)}\{\operatorname{coeff}_{A'}(x^a) \mid a \in M_\ell\} = k'$. Recall that $A'(x)$ is an evaluation of $A''$ at $y = \alpha_i$, i.e., $A'(x, t) = A''(x, \alpha_i, t)$, for the index $i$ with $f = f_i$. Thus, for all $a \in M$ we have $\operatorname{coeff}_{A'}(x^a) = \operatorname{coeff}_{A''}(x^a)|_{y = \alpha_i}$.
Let $C' \in \mathbb{F}[t]^{k \times |M_\ell|}$ be the matrix whose columns are $\operatorname{coeff}_{A'}(x^a)$, for $a \in M_\ell$. Similarly, let $C'' \in \mathbb{F}[y, t]^{k \times |M_\ell|}$ be the matrix whose columns are $\operatorname{coeff}_{A''}(x^a)$, for $a \in M_\ell$. Then we have $C' = C''|_{y = \alpha_i}$. Since $\operatorname{rank}(C') = k'$, there is a $k' \times k'$ submatrix $C'(R, \cdot)$, given by a set $R$ of $k'$ rows and some $k'$ columns, with nonzero determinant. The determinant of the corresponding submatrix $C''(R, \cdot)$ is a polynomial in $y$ and $t$ whose evaluation at $y = \alpha_i$ is $\det(C'(R, \cdot)) \neq 0$, and hence $\det(C''(R, \cdot)) \neq 0$. Thus, $\operatorname{rank}_{\mathbb{F}(y,t)}(C'') \ge k'$. Since the coefficients of $A''$ are linear combinations of the coefficients of $A$, the rank cannot exceed $k'$, and the lemma follows.
Using the Lagrange interpolation, we can construct a single shift which works for all ROABPs of width $\le w$.

Theorem 5.6. Let $n, d, w \in \mathbb{N}$. There is an $n$-tuple $f(t)$ of univariate polynomials of degree $(ndw)^{O(\log n)}$, computable in time $(ndw)^{O(\log n)}$, such that for any polynomial $A(x) \in \mathbb{F}^{w \times w}[x]$ of individual degree $d$ that is computed by an ROABP of width $w$, the polynomial $A(x + f(t))$ is $\log(w^2 + 1)$-concentrated.
Proof. Recall that for any polynomial $A(x) \in \mathbb{F}^{w \times w}[x]$ computed by an ROABP of width $w$, at least one tuple in the family $\mathcal{F} = \{f_1, f_2, \ldots, f_N\}$ obtained from [3, Lemma 8] gives $\log(w^2 + 1)$-concentration. By Lemma 5.5, the Lagrange interpolation $L(y, t)$ of $\{f_1, f_2, \ldots, f_N\}$ does so as well; it has $y$- and $t$-degrees $(ndw)^{O(\log n)}$. After shifting an $n$-variate polynomial of individual degree $d$ by $L(y, t)$, its coefficients are polynomials in $y$ and $t$ with degree at most $d' = dn\,(ndw)^{O(\log n)}$. Consider the determinant polynomial $\det(C''(R, \cdot))$ from Lemma 5.5. As the set of coefficients of the polynomial $A(x)$ has rank bounded by $w^2$, $\det(C''(R, \cdot))$ has degree bounded by $d'' = w^2 d'$.
Note that when we replace $y$ by $t^{d''+1}$, this does not affect the non-zeroness of the determinant, and hence the concentration is preserved. Thus, $f = L(t^{d''+1}, t)$ is an $n$-tuple of univariate polynomials in $t$ that fulfills the claim of the theorem. Now consider the case where the ROABP computes a polynomial $A(x) \in \mathbb{F}^{1 \times w}[x]$. It is easy to see that there exist $S \in \mathbb{F}^{1 \times w}$ and $B \in \mathbb{F}^{w \times w}[x]$ computed by a width-$w$ ROABP such that $A = SB$. We know that $B(x + f(t))$ is $\log(w^2 + 1)$-concentrated. As multiplication by $S$ is a linear operation, one can argue as in the proof of Lemma 4.11 that any linear dependence among the coefficients of $B(x + f(t))$ also holds among the coefficients of $A(x + f(t))$. Hence, $A(x + f(t))$ is $\log(w^2 + 1)$-concentrated. A similar argument works when $A(x) \in \mathbb{F}[x]$, by writing $A = SBT$, for some $S \in \mathbb{F}^{1 \times w}$ and $T \in \mathbb{F}^{w \times 1}$.
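The replacement of $y$ by a high power of $t$ is a standard Kronecker-style substitution; a toy check (the bivariate polynomial is our own example, not one from the paper):

```python
import sympy as sp

y, t = sp.symbols('y t')

# A nonzero bivariate polynomial with t-degree at most D = 2.
D = 2
g = t**2*y - t*y + 3*y**2

# Substituting y -> t^{D+1} maps distinct (deg_t, deg_y) pairs to distinct
# t-degrees (a base-(D+1) encoding), so non-zeroness is preserved.
h = sp.expand(g.subs(y, t**(D + 1)))
```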

Discussion
The first question is whether one can bring the time complexity of PIT for the sum of $c$ ROABPs down to $w^{O(c)}$ from $w^{O(2^c)}$. This blow-up happens because, when we combine the $w+1$ partial-evaluation polynomials given by ROABPs of width $w$, we get an ROABP of width $O(w^2)$. There are examples where this bound seems tight. So, a new property of sums of ROABPs needs to be discovered. It also needs to be investigated whether these ideas can be generalized to work for a sum of more than constantly many ROABPs, or for depth-3 multilinear circuits.
As mentioned in the introduction, the idea for the equivalence test for two ROABPs was inspired by the equivalence test for two ordered boolean branching programs (OBDDs). It would be interesting to know whether there are further concrete connections between arithmetic and boolean branching programs. In particular, can ideas from identity testing of an ROABP be applied to construct pseudo-random generators for OBDDs? For example, the less investigated model of the XOR of constantly many OBDDs can be checked for unsatisfiability by modifying our techniques.