
Polynomials, Divided Differences, and Codes

S. Venkitesh, Tel Aviv University, Israel
Abstract

Multiplicity codes (Kopparty et al., J. ACM 2014) are multivariate polynomial codes where the codewords are described by evaluations of polynomials (with a degree bound) and their derivatives up to some order (the multiplicity parameter) on a suitably chosen affine set of points. While efficient decoding algorithms were known in some special cases of point sets, by a reduction to univariate multiplicity codes, a general algorithm for list decoding up to the distance of the code when the point set is an arbitrary finite grid was obtained only recently (Bhandari et al., IEEE TIT 2023). This algorithm requires the characteristic of the field to be zero or larger than the degree bound, which is somewhat necessary, since list decoding up to distance with small output list size is not possible when the characteristic is significantly smaller than the degree.

In this work, we present an alternative construction based on divided differences of polynomials, that conceptually resembles the classical multiplicity codes but has good list decodability “insensitive to the field characteristic”. We obtain a simple algorithm that list decodes this code up to distance for arbitrary finite grids over all finite fields. Our construction can also be interpreted as a folded Reed-Muller code, which may be of independent interest.

Keywords and phrases:
Error-correcting code, polynomial code, Reed-Solomon code, Reed-Muller code, folded Reed-Solomon code, folded Reed-Muller code, multiplicity code, divided difference, q-derivative, polynomial method, list decoding, list decoding capacity, linear algebraic list decoding
Funding:
S. Venkitesh: Len Blavatnik and the Blavatnik Family Foundation.
Copyright and License:
© S. Venkitesh; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Mathematics of computing → Coding theory; Theory of computation → Error-correcting codes
Related Version:
Full Version: https://doi.org/10.48550/arXiv.2412.01755
Acknowledgements:
The author thanks Dean Doron, Mrinal Kumar, Noga Ron-Zewi, Amnon Ta-Shma, and Mary Wootters for patiently listening to and giving feedback on presentations of this work at different times. A part of this work was done while the author was a postdoc at the Department of Computer Science, University of Haifa, and was participating in the program on Error Correcting Codes and Computation (2024) at the Simons Institute for the Theory of Computing, UC Berkeley.
Funding:
A part of this work was done while the author was supported in part by BSF grant 2021683, ISF grant 735/20, and by the European Union (ERC, ECCC, 101076663).
Editors:
Raghu Meka

1 Introduction and overview

Codes based on polynomial evaluations have found widespread applications, both implicitly and explicitly, in theoretical computer science, combinatorics, and allied areas. The prototypical univariate polynomial code is the Reed-Solomon (RS) code [57, 58], wherein the codewords are evaluations of degree-bounded univariate polynomials on a set of points, and the prototypical multivariate polynomial code is the analogously defined Reed-Muller (RM) code [52, 57, 39, 68, 14] (the RM code was considered over the binary field by [52, 57], and over larger fields by [39, 68, 14]). Despite several decades of research, the decodability properties of these codes are far from being well-understood.

1.1 The list decoding problem

In this work, we are specifically interested in the list decoding problem [16, 69], where the aim is to output a list of close codewords for any given received word efficiently. The main parameter of interest is the list decoding radius, which represents the fraction of disagreements that we can decode from, and we would like it to be as large as possible.

Unique decoding of polynomial codes

To begin with, consider the easier setting where we fix the list decoding radius to be half the minimum distance of the code, which is precisely the setting of the unique decoding problem: in this case, there can be at most one codeword this close to any given received word, and the aim is to efficiently find it if it exists. This problem has now been solved completely for RS codes [56, 49, 67], for univariate multiplicity codes [53], for RM codes in a special case [40, 5], and quite recently for RM codes in general [41] and for multivariate multiplicity codes [8]. What is noteworthy is that all of these algorithms are insensitive to the field characteristic. We now move on to consider larger list decoding radii.

List decoding of univariate polynomial codes

In the last decade of the 20th century, two classic works [65, 27] showed that all RS codes can be list decoded up to the Johnson bound (which is relative radius 1 − √R, where R is the rate of the code), and also laid an algebraic algorithmic framework that has since been exploited over and over again for several other polynomial codes. Further, while we do not know of explicit RS codes that can be list decoded beyond the Johnson bound, we do know of explicit RS codes that are not list decodable significantly beyond the Johnson bound [7]. Due to recent breakthroughs [11, 26, 2], we now know that random RS codes are list decodable up to (information theoretic) capacity (that is, relative radius 1 − R − ϵ for arbitrarily small but fixed ϵ > 0, which represents getting approximately close to the information theoretic limit of list decoding), but we neither know of an explicit construction nor know of an algorithm to achieve this.

It is in this context of list decoding up to the information theoretic capacity that a folding trick has been enlightening. The works of [28, 29] showed that folded Reed-Solomon (FRS) codes (where the symbols of an RS codeword are bunched into blocks of large constant size), and univariate multiplicity codes [43] (wherein the codewords are evaluations of polynomials and their derivatives up to a large constant multiplicity) can be algorithmically list decoded up to capacity! Further improvements to the parameters have been achieved in [44, 61, 63, 12] in different settings.

List decoding of multivariate polynomial codes

Moving on to multivariate polynomial codes, while the list decoding algorithms of [65, 27] generalize easily to RM codes, the decoding radius now worsens as a function of the number of variables, and attaining the Johnson bound is no longer possible with these algorithms. While we know of a Johnson bound attaining algorithm [55] for RM codes when the finite grid is the full vector space 𝔽_q^m over the base field 𝔽_q, designing a more general algorithm for arbitrary finite grids is still open!

As it turns out, the folding trick helps in the multivariate setting too. The multivariate multiplicity codes [45], which are the naturally defined multivariate analogues of univariate multiplicity codes, were shown to be list decodable up to distance (that is, up to radius δ − ϵ, where δ is the minimum distance of the code) by [43] when the set of evaluation points is the full vector space 𝔽_q^m. However, just as in the case of RM codes, an analogous result for arbitrary finite grids evaded us for quite some time. Very recently, [9] finally bolstered the algebraic algorithmic framework with some simple additional ingredients and managed to design an algorithm for list decoding multivariate multiplicity codes over arbitrary finite grids up to distance.

Sensitivity of list decodability to the field characteristic

Finally, yet another crucial component that could influence the performance of the code (specifically a polynomial code) is the most fundamental underlying algebraic object – the base field over which the polynomials are considered, which is always a finite field in our discussion.

Unlike the unique decoding setting that we mentioned earlier, we do observe sensitivity to the field characteristic in the setting of the list decoding problem. Let us fix the notation that 𝔽q denotes the base field, and p denotes the characteristic of 𝔽q. On one hand, we know that when q is a large power of p, there are RS codes that cannot achieve anywhere close to list decoding capacity [7] since we can show instances of received words having superpolynomial sized output lists. On the other hand, upon large constant folding, the univariate multiplicity codes achieve capacity when p is larger than the degree parameter [29, 43], or smaller but linear in the degree [44], and the FRS codes achieve list decoding capacity [28, 29] insensitive to p. Thus, the list decodability of the FRS code is insensitive to the field characteristic, whereas the list decodability of the multiplicity code is sensitive to the field characteristic. Indeed, in the case when q is a large power of p, the large output lists shown by [7] for RS codes can be used to construct large output lists for multiplicity codes, and so the list decodability is provably poor.

Things get even more interesting in the multivariate setting, since the multivariate multiplicity codes achieve distance when p is larger than the degree parameter [55, 9], but we do not know of a field characteristic insensitive construction of a multivariate polynomial code that algorithmically achieves distance. Furthermore, we also do not know of an appropriate multivariate analogue of the field characteristic insensitive univariate FRS code. In this work, we answer both questions with a single code construction, along with a list decoding algorithm. In particular, our code subsumes the list decoding performance of the field characteristic sensitive multivariate multiplicity code [9], and also provides a natural folded Reed-Muller code construction. The motivation for our construction comes from an extremely simple yet foundational observation within the univariate world itself – the FRS code is also a multiplicity code!

1.2 FRS codes, univariate multiplicity codes, and divided differences

Fix a field 𝔽_q. Consider distinct nonzero points a_1, …, a_n ∈ 𝔽_q. Let γ ∈ 𝔽_q^× be a multiplicative generator, and suppose s ≥ 1 is such that the points γ^j a_i, j ∈ [0, s−1], i ∈ [n], are all distinct. For k ∈ [sn], the degree-k Folded Reed-Solomon (FRS) code is defined by

FRS_s(a_1, …, a_n; k) := { [f]_FRS := [ (f(a_1), f(γa_1), …, f(γ^{s−1}a_1)), …, (f(a_n), f(γa_n), …, f(γ^{s−1}a_n)) ] : f(X) ∈ 𝔽_q[X], deg(f) < k } ⊆ (𝔽_q^s)^n.

Here, the distance function is the Hamming distance on the alphabet 𝔽_q^s; that is, the Hamming weight (we are only concerned with codes that are linear over the base field 𝔽_q, and so the Hamming distance between two codewords c_1, c_2 is always equal to the Hamming weight of the codeword c_1 − c_2) of a codeword [f]_FRS is the number of i ∈ [n] such that the block (f(a_i), f(γa_i), …, f(γ^{s−1}a_i))^𝚝 is a nonzero vector in 𝔽_q^s.

On the other hand, simply assuming that a_1, …, a_n ∈ 𝔽_q are distinct and nothing more about these points, for k ∈ [sn], the degree-k (univariate) multiplicity code is defined by

Mult_s(a_1, …, a_n; k) := { [f]_Mult := [ (f^(0)(a_1), f^(1)(a_1), …, f^(s−1)(a_1)), …, (f^(0)(a_n), f^(1)(a_n), …, f^(s−1)(a_n)) ] : f(X) ∈ 𝔽_q[X], deg(f) < k } ⊆ (𝔽_q^s)^n.

Here f^(j)(X) := d^j f(X)/dX^j for all j ∈ [0, s−1]. (Typically, since we are working over finite fields, there will also be a 1/j! scaling factor multiplied with the derivative, to prevent some pathologies arising in finite fields about higher multiplicity vanishing at a point; with this scaling factor, the derivative is usually called the Hasse derivative. We do not consider this, and implicitly assume that the characteristic is large enough so that these pathologies do not arise. Indeed, this is precisely the setting where the multiplicity codes have good list decodability.) Once again, the Hamming weight of a codeword [f]_Mult is the number of i ∈ [n] such that the block (f^(0)(a_i), f^(1)(a_i), …, f^(s−1)(a_i))^𝚝 is a nonzero vector in 𝔽_q^s.

Let us begin by looking at an algebraic feature that is intrinsic to multiplicity codes. Consider any f(X) ∈ 𝔽_q[X] with deg(f) < k. For any a ∈ 𝔽_q, we get the standard Taylor expansion

f(X) = f^(0)(a) + f^(1)(a)(X−a) + f^(2)(a)(X−a)^2/2! + ⋯ + f^(k−1)(a)(X−a)^{k−1}/(k−1)!,

provided p := char(𝔽_q) ≥ k. Is there an analogous Taylor expansion in the context of the FRS code? As it turns out, there is such an expansion, and it is even simpler: it is the expansion in terms of the Newton forward differences. Precisely, we have

f(X) = f[0](a) + f[1](a)(X−a) + f[2](a)(X−a)(X−γa) + ⋯ + f[k−1](a)(X−a)(X−γa)⋯(X−γ^{k−2}a),

where

f[0](a) = f(a),  f[1](a) = (f(γa) − f(a))/((γ−1)a),  …,  f[k−1](a) = (f[k−2](γa) − f[k−2](a))/((γ^{k−1}−1)a).

Notably, since γ ∈ 𝔽_q^× is a multiplicative generator, we always have ord(γ) = q−1 ≥ k, and therefore, this expansion is valid unconditionally on p.

In the first decade of the 20th century, Jackson [31, 32, 33] (also see [35, 34]) defined the following specific divided difference operator on the polynomial ring,

𝖣_γ f(X) := (f(γX) − f(X))/((γ−1)X),  and  𝖣_γ^{t+1} := 𝖣_γ ∘ 𝖣_γ^t for all t ≥ 1.

In terms of this operator, the above expansion takes the form

f(X) = f(a) + 𝖣_γ f(a)(X−a) + 𝖣_γ^2 f(a)(X−a)(X−γa)/[2]_γ! + ⋯ + 𝖣_γ^{k−1} f(a)(X−a)(X−γa)⋯(X−γ^{k−2}a)/[k−1]_γ!,

where [t]_γ := (γ^t − 1)/(γ − 1) = 1 + γ + γ^2 + ⋯ + γ^{t−1}, and [t]_γ! := [t]_γ [t−1]_γ ⋯ [1]_γ. Returning to the code FRS_s(a_1, …, a_n; k), it is then immediate from the definition of 𝖣_γ that there are invertible lower triangular matrices U(a_i) ∈ 𝔽_q^{s×s}, i ∈ [n], such that for any codeword [f]_FRS, we have

(f(a_i), 𝖣_γ f(a_i), …, 𝖣_γ^{s−1} f(a_i))^𝚝 = U(a_i) (f(a_i), f(γa_i), …, f(γ^{s−1}a_i))^𝚝  for all i ∈ [n].

In other words, after a specific change of basis (see, for instance, [42, 4] for a proof), the FRS code can be interpreted as a multiplicity code with respect to the derivative-like operator 𝖣γ.
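As a sanity check on the univariate picture, the operator 𝖣_γ and the Newton forward difference expansion above can be implemented in a few lines. The following Python sketch is our own illustration, not part of the paper: polynomials are coefficient lists over a small prime field, inverses are computed by Fermat inversion, and the generator and sample polynomial are chosen arbitrarily.

```python
# Sketch (illustrative, not from the paper): Jackson's operator D_gamma over
# F_q for a prime q, with polynomials stored as coefficient lists.
q = 13
gamma = 2  # a multiplicative generator of F_13^x (order 12 >= deg f)

def bracket(t):
    # [t]_gamma = 1 + gamma + ... + gamma^(t-1) (mod q)
    return sum(pow(gamma, j, q) for j in range(t)) % q

def D(f):
    # D_gamma maps X^k to [k]_gamma X^(k-1)
    return [bracket(k) * f[k] % q for k in range(1, len(f))]

def ev(f, x):
    return sum(c * pow(x, k, q) for k, c in enumerate(f)) % q

def newton_expansion(f, a, x):
    # sum_k D^k f(a) / [k]_gamma! * (x - a)(x - gamma*a)...(x - gamma^(k-1)*a)
    total, g, fact = 0, list(f), 1
    for k in range(len(f)):
        prod = 1
        for t in range(k):
            prod = prod * (x - pow(gamma, t, q) * a) % q
        total = (total + ev(g, a) * pow(fact, q - 2, q) * prod) % q
        g = D(g)
        fact = fact * bracket(k + 1) % q
    return total

f = [3, 1, 4, 1, 5]  # f(X) = 3 + X + 4X^2 + X^3 + 5X^4
# The expansion agrees with f at every point, for every nonzero center a.
assert all(newton_expansion(f, a, x) == ev(f, x)
           for a in range(1, q) for x in range(q))
```

Note that no assumption on the characteristic is used: the check works because ord(γ) exceeds deg(f), mirroring the condition ord(γ) = q−1 ≥ k above.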

Note that we have list decoding algorithms for FRS codes [28, 29], with the first one being algebraic and the second one being linear algebraic. If we are given the 𝖣γ-multiplicity encoding, we may just change the basis to the usual FRS encoding and run one of these two FRS decoders. However, what is really encouraging is that we can straightforwardly adapt the linear algebraic multiplicity decoder [29] to our 𝖣γ-multiplicity encoding, and the analysis will have a strong resemblance with that in the case of the classical derivatives. In this work, our main contribution is a construction that pushes this resemblance to the multivariate setting.

Our main algorithm will, in fact, work in the more general paradigm of list recovery, as was the case with the previous algorithms mentioned here.

Assumption on the field

Since our intention is to present the basic ideas as clearly and quickly as possible, we will stick to a simpler setting henceforth, where we assume that the evaluation points are contained in a finite field 𝔽_q, but the polynomials are over a degree-3 field extension 𝕂 = 𝔽_{q^3}. (See Remark 4.) We can relax this setting to some extent by tinkering with the parameters involved, but we will not make this effort.

The 𝚀-derivative

We will now forego the use of the symbol γ for the multiplicative generator. Instead we will denote by 𝚀 ∈ 𝕂^× a multiplicative generator, and henceforth refer to the operator 𝖣_𝚀 as the 𝚀-derivative; this is the more standard terminology in the literature where this operator has appeared before. (We use the notation '𝚀' in 𝚀-derivative in this work instead of the more popular 'q', since we use q to denote the field size.) The 𝚀-derivative is also called the Jackson derivative, and is a special case of the Hahn derivative [30], as well as the Newton forward difference [36, Chapter 1, Section 9]. See [37, Chapter 26] for a discussion on a slightly more general quantum differential, as well as some symmetrized variants. These are all instances of a broader notion of divided difference in finite calculus [46, 10, 54, 64, 36, 59]. The 𝚀-derivative seems to have been used so far only over fields of characteristic zero, particularly in q-combinatorics [18, 3, 21, 60, 20] and quantum calculus [37, 17, 47]. Further, the 𝚀-derivative does not seem to have made any explicit appearance so far in the polynomial method literature. However, as we see in this work, this notion turns out to be useful over fields of positive characteristic, and indeed, as part of the polynomial method.

1.3 Multivariate 𝚀-derivatives, multivariate 𝚀-multiplicity code, and folded RM code

Our extension of 𝚀-derivatives from the univariate to the multivariate setting is natural, and mimics the extension of the classical derivatives. Assume indeterminates 𝕏 = (X_1, …, X_m), and for any f(𝕏) ∈ 𝕂[𝕏] and α ∈ ℕ^m, we define the α-th 𝚀-derivative as the iterated operator

𝖣_𝚀^α f(𝕏) := 𝖣_{𝚀,X_1}^{α_1} ⋯ 𝖣_{𝚀,X_m}^{α_m} f(𝕏),

where 𝖣𝚀,Xi denotes the univariate 𝚀-derivative in the variable Xi. It is easy to see that the operator 𝖣𝚀α is well-defined and invariant under the order of iterations, just as in the case of the classical partial derivatives.
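The commutativity claim can be checked directly on a toy implementation. The sketch below is our own illustration (hypothetical names): multivariate polynomials over a small prime field are stored as dictionaries from exponent tuples to coefficients, and the two orders of iterating the partial 𝚀-derivatives agree.

```python
# Sketch (illustrative, not from the paper): partial Q-derivatives on
# multivariate polynomials over F_q (q prime), stored as
# {exponent tuple: coefficient}; the two orders of iteration agree.
q, Q = 13, 2  # Q = 2 generates F_13^x

def bracket(t):
    # [t]_Q = 1 + Q + ... + Q^(t-1) (mod q)
    return sum(pow(Q, j, q) for j in range(t)) % q

def DQ(f, i):
    # partial Q-derivative in variable i: maps X_i^k to [k]_Q X_i^(k-1)
    out = {}
    for mono, c in f.items():
        if mono[i] > 0:
            new = mono[:i] + (mono[i] - 1,) + mono[i + 1:]
            out[new] = (out.get(new, 0) + bracket(mono[i]) * c) % q
    return {m: c for m, c in out.items() if c}

f = {(2, 3): 5, (1, 0): 7, (0, 4): 11}  # 5*X1^2*X2^3 + 7*X1 + 11*X2^4
assert DQ(DQ(f, 0), 1) == DQ(DQ(f, 1), 0)
```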

Now consider indeterminates 𝕐 = (Y_1, …, Y_m). For any t ≥ 0, denote (X_i − Y_i)_𝚀^(t) := (X_i − Y_i)(X_i − 𝚀Y_i)⋯(X_i − 𝚀^{t−1}Y_i), and further, for any α ∈ ℕ^m, denote (𝕏 − 𝕐)_𝚀^(α) := ∏_{i=1}^m (X_i − Y_i)_𝚀^(α_i). Also denote [α]_𝚀! := [α_1]_𝚀! ⋯ [α_m]_𝚀!, and 𝚀^α 𝕏 := (𝚀^{α_1}X_1, …, 𝚀^{α_m}X_m). Then for any s ≥ 1 and a ∈ (𝔽_q^×)^m, there exists (see Proposition 9) an invertible matrix U(a) ∈ 𝕂^{N×N}, where N = (m+s−1 choose s−1), such that

[𝖣_𝚀^γ f(a)]_{|γ|<s} = U(a) [f(𝚀^γ a)]_{|γ|<s}  for all f(𝕏) ∈ 𝕂[𝕏]. (1)

Fix any s ≥ 1, and consider any nonempty A ⊆ 𝔽_q^×. For any k ∈ [s|A|], we define the m-variate multiplicity-s degree-k 𝚀-multiplicity code by

𝚀Mult_{m,s}(A; k) := { [𝖣_𝚀 f] := [ [𝖣_𝚀^γ f(a)]_{|γ|<s} ]_{a ∈ A^m} : f(𝕏) ∈ 𝕂[𝕏], deg(f) < k } ⊆ (𝕂^{(m+s−1 choose s−1)})^{|A|^m}.

Note that the distance function is now the Hamming distance on the alphabet 𝕂^{(m+s−1 choose s−1)}. Further, by the change of basis that we just saw above, we can define a natural multivariate analogue of the univariate FRS codes. For any k ∈ [s|A|], we define the s-folded degree-k folded Reed-Muller (FRM) code by

FRM_{m,s}(A; k) := { [𝚀f] := [ [f(𝚀^γ a)]_{|γ|<s} ]_{a ∈ A^m} : f(𝕏) ∈ 𝕂[𝕏], deg(f) < k } ⊆ (𝕂^{(m+s−1 choose s−1)})^{|A|^m}.

So by (1), we clearly have the change of basis

𝚀Mult_{m,s}(A; k) = diag(U(a) : a ∈ A^m) · FRM_{m,s}(A; k).

The rate and distance of our code constructions can be given as follows, which will be immediate from Corollary 11 and Theorem 13.

Proposition 1.

For any k ∈ [s|A|], the codes 𝚀Mult_{m,s}(A; k) and FRM_{m,s}(A; k) have

rate equal to (m+k−1 choose k−1) / ((m+s−1 choose s−1) · |A|^m),  and distance at least 1 − (k−1)/(s|A|).
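In concrete terms, the rate and distance bound of Proposition 1 are straightforward to evaluate. The short Python sketch below does so for illustrative (hypothetical) parameter choices of m, s, |A|, and k.

```python
# Sketch: evaluating the rate and distance bound of Proposition 1 for
# illustrative (hypothetical) parameters m, s, |A|, k.
from math import comb

def rate(m, s, A_size, k):
    # (m+k-1 choose k-1) message coefficients, over |A|^m symbols drawn
    # from an alphabet of dimension (m+s-1 choose s-1)
    return comb(m + k - 1, k - 1) / (comb(m + s - 1, s - 1) * A_size ** m)

def distance_bound(s, A_size, k):
    # distance is at least 1 - (k-1)/(s*|A|)
    return 1 - (k - 1) / (s * A_size)

m, s, A_size = 2, 4, 8
k = 16  # k must lie in [s * |A|] = [32]
print(rate(m, s, A_size, k))         # 0.2125
print(distance_bound(s, A_size, k))  # 0.53125
```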

1.4 Our result

Our main result is that over any field 𝔽_q with sufficiently large q, and for sufficiently large constant multiplicity s, any constant rate multivariate 𝚀-multiplicity code 𝚀Mult_{m,s}(A; k) can be efficiently list decoded up to distance.

Theorem 2.

Consider any δ ∈ (0,1) and ϵ ∈ (0, 1−δ). For any A ⊆ 𝔽_q^× and for the choices s = O(1/ϵ^{2m+1}) and k = (1−δ)s|A|, the code 𝚀Mult_{m,s}(A; k) is efficiently list decodable up to radius δ − ϵ with output list contained in a 𝕂-affine space of dimension at most O(1/ϵ^{2m}).

Our list decoding algorithm is conceptually simpler than that by [9] for the classical multivariate multiplicity codes. Their algorithm in the classical setting entailed recovering the close enough polynomial messages one homogeneous component at a time, while using the classical Euler's formula for homogeneous polynomials (which states that for a homogeneous degree-d polynomial f(𝕏) over a field of characteristic greater than d, we have ∑_{i=1}^m X_i ∂f(𝕏)/∂X_i = d·f(𝕏)) to fully recover a homogeneous component (or a partial derivative of it) from its further first order partial derivatives. This formula, as well as the Taylor expansion for classical derivatives employed within the recovery step, both require the field characteristic to be large. In our 𝚀-setting, firstly, there is no Euler's formula involving 𝚀-derivatives, and interestingly, we do not need one! Secondly, the Taylor expansion for 𝚀-derivatives is clearly field characteristic insensitive. These are the only two places in the analysis where, conceptually, the field characteristic was relevant in the classical setting, and is now irrelevant in the 𝚀-setting. Further, the short and clean analysis that we obtain encourages us to believe that this construction is the correct way to obtain a folded version of the RM code.

Let us give a quick outline of our algorithm. We make use of a clean gluing trick from [9] (as mentioned in [9], this gluing trick seems to have first appeared in the construction of a hitting set generator in [25]), and then proceed to perform a simpler analysis that is extremely close to the univariate analysis of [29]. The informal takeaway of what we show is that

gluing trick + univariate list decoding analysis = multivariate list decoding analysis.

Suppose we are given the code 𝚀Mult_{m,s}(A; k) as in the statement of Theorem 2. Consider any received word

w = (w_a := [w_a^{(γ)}]_{|γ|<s} : a ∈ A^m) ∈ (𝕂^{(m+s−1 choose s−1)})^{|A|^m}.

Consider two fresh sets of indeterminates 𝕐 = (Y_γ)_{|γ|<s} and 𝐙 = (Z_1, …, Z_m). For a suitably chosen parameter r ≤ s, we will find a nonzero interpolating polynomial with coefficients in 𝕂[𝐙], having the form

P(𝕏, 𝕐, 𝐙) = P̃(𝕏) + ∑_{j=0}^{r−1} P_j(𝕏) (∑_{|γ|=j} Y_γ 𝐙^γ),

such that the following hold.

  1. (Z1)

    The polynomials P̃(𝕏), P_0(𝕏), …, P_{r−1}(𝕏) must have suitably chosen 𝕏-degree bounds. We will also simultaneously ensure a good 𝐙-degree bound on the coefficients of these polynomials by performing Gaussian elimination over 𝕂[𝐙], as given in [38] or [9, Lemma 3].

  2. (Z2)

    For any f(𝕏) ∈ 𝕂[𝕏], define a projection

    P[f](𝕏) := P̃(𝕏) + ∑_{j=0}^{r−1} P_j(𝕏) (∑_{|γ|=j} 𝖣_𝚀^γ f(𝕏) 𝐙^γ).

    Further, for every α ∈ ℕ^m with |α| < s−r, define a 𝕂(𝐙)-linear operator Δ^(α) (on an appropriate subspace of 𝕂(𝐙)[𝕏, 𝕐]) that satisfies the condition (Δ^(α)(P))[f](𝕏) = 𝖣_𝚀^α P[f](𝕏). The polynomial P(𝕏, 𝕐, 𝐙) must now satisfy the vanishing conditions

    Δ^(α)(P)(a, w_a, 𝐙) = 0  for all α ∈ ℕ^m with |α| < s−r, and all a ∈ A^m.

Note that the vanishing conditions in (Z2) define a linear system on the coefficients of P(𝕏, 𝕐, 𝐙), and we obtain the interpolating polynomial as a nontrivial solution of this linear system, by having large enough degree bounds as in (Z1). So for any close enough polynomial f(𝕏) (that is, one having a large number of agreements with w), the conditions in (Z2) will imply that P[f](𝕏) vanishes at the agreement points with multiplicity at least s−r (in the sense of multivariate 𝚀-derivatives, which we will define later), and this will imply that P[f](𝕏) = 0 by a suitable version of the Polynomial Identity Lemma for multivariate 𝚀-derivatives. We can then solve this equation to recover f(𝕏). Importantly, the list decoding radius depends on the choice of r. We can achieve list decoding up to distance, that is, the claim of Theorem 2, by choosing s = O(1/ϵ^{2m}) and r = O(1/ϵ^m). In fact, as in the analyses of [29, 9], we will show that the collection of all close enough polynomials is contained in a 𝕂-affine subspace having dimension at most (m+r−2 choose r−2).

This is a fairly simple strategy, very much within the algebraic algorithmic framework employed in previous list decoding algorithms [65, 27, 28, 29, 43, 9]. Further, employing the gluing trick with the additional 𝐙-variables as in [9] allows us to formulate what we believe is the correct way to extend the linear algebraic decoding framework from the univariate to the multivariate setting, and our extension turns out to be simpler than the analysis in [9].

There is a subtlety in the recovery step in our multivariate setting vis-à-vis the univariate setting. When we solve the equation P[f](𝕏)=0 to recover a close enough polynomial f(𝕏), we will see that the coefficients of f(𝕏) will be captured within the coefficients of some linear relations between monomials in the 𝐙-variables. Using a large enough efficient hitting set (an interpolating set of affine points that is large enough to certify all polynomials with a degree bound) in the 𝐙-variables, we can then recover each of these coefficients. This is similar to what was done in [9] in an analogous situation, but our overall analysis is a bit different and simpler. We will outline this difference now.

Formally, for the vector space V_d ⊆ 𝕂[𝐙] of all polynomials having degree at most d, a hitting set is a finite set H ⊆ 𝕂^m such that for any g(𝐙) ∈ V_d, if g(𝐙) ≠ 0 then there exists u ∈ H such that g(u) ≠ 0. For instance, by the Combinatorial Nullstellensatz [1, Theorem 1.1], every finite grid S^m with S ⊆ 𝕂, |S| > d, is a hitting set for V_d. (We will precisely use this obvious hitting set in our analysis.) Our departure from the analysis of [9] is that they need to recover the coefficients of f(𝕏) one homogeneous component at a time, by employing Euler's formula. In our analysis, we will directly recover one coefficient at a time (with respect to a suitable choice of basis), and this is extremely similar to the corresponding step in the linear algebraic list decoding algorithm [29] for classical univariate multiplicity codes. Recall that we intend to obtain an affine subspace of small dimension that contains all close enough polynomials. Assume the form f(𝕏) = ∑_{|α|<k} f_α 𝕏^α for all target polynomials in the affine subspace. We will recover the target polynomials inductively from lower coefficients to higher coefficients. In order to kick-start the recovery procedure, as a base case of the induction, we set the coefficients (f_γ : |γ| ≤ r−2) arbitrarily. Then for any α ∈ ℕ^m with |α| ≤ k−r, the equation P[f](𝕏) = 0 implies an equation of the form

𝕂[𝐙]-linear combination of (f_{α+γ} : |γ| = r−1) = 𝕂[𝐙]-linear combination of (f_{θ+η} : (|η|, θ) ⪯ (r−1, α), (|η|, θ) ≠ (r−1, α)),

where the polynomials comprising the above 𝕂[𝐙]-linear combinations all have a good 𝐙-degree bound, owing to the 𝐙-degree bound on P(𝕏, 𝕐, 𝐙). Note that each of the coefficients f_{θ+η} on the R.H.S. above is strictly below (in the usual componentwise partial order) some coefficient f_{α+γ} on the L.H.S. above. Since, by the induction hypothesis, we have already recovered all the coefficients on the R.H.S., the 𝕂[𝐙]-linear combination comprising the L.H.S. above is also determined. We then instantiate a hitting set in the 𝐙-variables to recover the individual coefficients on the L.H.S. above, in much the same way as [9] do for their linear combinations.

Thus, the coefficients (f_γ : |γ| ≤ r−2) are the only possible free coefficients in the above procedure, and therefore, the output list is contained in an affine 𝕂-subspace having dimension at most (m+r−2 choose r−2). We can further adapt the off-the-shelf pruning algorithm by [44] and its improved analysis by [66] to the multivariate setting, nearly identically as in [9], to finally end up with a randomized algorithm that guarantees constant output list size. Very recently, two concurrent works [63, 12] improved the output list size guarantee for list decoding FRS codes and univariate multiplicity codes from exponential in 1/ϵ to O(1/ϵ^2) and the optimal O(1/ϵ), respectively. It is possible to extend their argument in the obvious way to the multivariate setting, to give an even smaller output list size guarantee for list decoding multivariate 𝚀-multiplicity codes. We do not mention the details in this presentation.

We will, in fact, prove a more general list recovery result as in Theorem 19, but stick to only showing polynomial output list size guarantee, for simplicity.

1.5 Further questions

Multivariate multiplicity codes evaluated on the vector space 𝔽qm are known to have good locality properties [45], especially when the field characteristic is larger than the degree [43, 44]. It will be interesting to see if similar locality properties hold for the multivariate 𝚀-multiplicity code in a field characteristic insensitive sense.

2 Preliminaries

List decoding and list recovery

Let Σ be a finite alphabet. An input list is any tuple S = (S_1, …, S_n) of subsets S_i ⊆ Σ. Further, for any a ∈ Σ^n, the (relative) Hamming distance between a and S is defined by

𝖽(a, S) = (1/n) |{ i ∈ [n] : a_i ∉ S_i }|.

We say an input list S = (S_1, …, S_n) is ℓ-sized if |S_i| ≤ ℓ for all i ∈ [n].
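The distance 𝖽(a, S) is just the fraction of coordinates whose input list misses the corresponding symbol. A minimal Python sketch (the names and sample data are ours):

```python
# Sketch (names are ours): relative Hamming distance between a word a
# and an input list S = (S_1, ..., S_n).
def list_distance(a, S):
    n = len(a)
    return sum(1 for ai, Si in zip(a, S) if ai not in Si) / n

a = "abcde"
S = [{"a"}, {"x", "y"}, {"c"}, {"d", "z"}, {"q"}]
print(list_distance(a, S))  # 0.4  (positions 2 and 5 miss their lists)
```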

Let 𝔽 be a field, and consider an 𝔽-linear code C, that is, one where Σ is an 𝔽-linear space. (The more conventional definition requires C to be linear over Σ = 𝔽; our definition is more relaxed in that we allow Σ to be an 𝔽-linear space. The relaxed definition holds for our codes of interest, whereas the conventional definition does not, and indeed, we heavily use the relaxed linearity properties of our code constructions, often implicitly.) For ρ ∈ [0,1] and ℓ, L ∈ ℤ^+, we say C is (ρ, ℓ, L)-list recoverable (LR) if there does not exist any ℓ-sized sequence of input lists S that admits L+1 distinct codewords c^(1), …, c^(L+1) ∈ C such that

𝖽(c^(j), S) ≤ ρ  for all j ∈ [L+1].

The case ℓ = 1 corresponds to list decoding, with identical definitions.

Field extensions

Fix a finite field 𝔽_q, and a finite degree extension 𝕂/𝔽_q having extension degree κ, that is, 𝕂 = 𝔽_{q^κ}. Also denote [κ]_q = 1 + q + ⋯ + q^{κ−1}. Fix a multiplicative generator 𝚀 ∈ 𝕂^×. We immediately note the following basic observations (see, for instance, [48, Chapter 2]), and also add quick proofs for completeness.

Proposition 3.
  1. (a)

    [48, Theorem 2.10] 𝕂 = 𝔽_q(𝚀).

  2. (b)

    [48, Lemma 2.3] 𝚀^t ∉ 𝔽_q, for all t ∈ [[κ]_q − 1].

Proof.
  1. (a)

    Since 𝔽_q ⊆ 𝕂 and 𝚀 ∈ 𝕂, we have 𝔽_q(𝚀) ⊆ 𝕂. Since 𝕂^× = {𝚀^t : t ≥ 0}, we have 𝕂 ⊆ 𝔽_q(𝚀).

  2. (b)

    Since 𝚀 is the multiplicative generator of 𝕂^×, and 𝕂 = 𝔽_{q^κ}, the multiplicative order of 𝚀 is q^κ − 1. Obviously, 𝚀^t ≠ 0 for all t ≥ 1. Now suppose 𝚀^t ∈ 𝔽_q^× for some t ≥ 1. Then 𝚀^{t(q−1)} = 1, which means (q^κ − 1) | t(q−1), that is, (q^κ − 1)/(q − 1) = 1 + q + ⋯ + q^{κ−1} = [κ]_q divides t.
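Proposition 3(b) is easy to verify in a toy case. The Python sketch below is our own illustration: it represents 𝕂 = 𝔽_8 over 𝔽_q with q = 2 and κ = 3, encoding field elements as 3-bit integers modulo the irreducible x^3 + x + 1, and checks that the generator 𝚀 = x has order q^κ − 1 = 7 = [κ]_q, so that 𝚀^t ∉ 𝔽_2 for all t ∈ [[κ]_q − 1].

```python
# Sketch (illustrative): Proposition 3(b) in K = F_8 over F_2 (kappa = 3).
# Elements are 3-bit ints, i.e. polynomials over F_2 modulo x^3 + x + 1.
def gf8_mul(a, b):
    r = 0
    for i in range(3):          # schoolbook carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce degrees 4 and 3 using x^3 = x + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

Q = 0b010  # the element x, a generator of F_8^x
powers = [1]
for _ in range(7):
    powers.append(gf8_mul(powers[-1], Q))

kappa_q = 1 + 2 + 4  # [kappa]_q = 7 for q = 2, kappa = 3
assert powers[7] == 1                                   # ord(Q) = 2^3 - 1 = 7
assert all(powers[t] not in (0, 1) for t in range(1, kappa_q))  # Q^t not in F_2
```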

 Remark 4.

Note that in this work, we will assume that the field size q is sufficiently large. We will also consider an additional parameter s ≥ 1, and always work in the case where 𝚀, …, 𝚀^{s−1} ∉ 𝔽_q. By Proposition 3(b), this is true if s−1 < [κ]_q = 1 + q + q^2 + ⋯ + q^{κ−1}. Further, the degree d of all polynomials that we consider will satisfy d ≤ sq − 1. Therefore, we will also need sq − 1 < [κ]_q = 1 + q + q^2 + ⋯ + q^{κ−1}. Both these requirements are satisfied if we take κ = 3 and s ≤ q, and we assume these throughout the rest of this work. In fact, we will only take s to be a constant relative to q. So henceforth, we have 𝕂 = 𝔽_{q^3}, and 𝚀 is a multiplicative generator of (𝔽_{q^3})^×.

For any n ≥ k ≥ 0, denote

[n]_𝚀 = ∑_{t=0}^{n−1} 𝚀^t = (𝚀^n − 1)/(𝚀 − 1),  [n]_𝚀! = ∏_{t=1}^{n} [t]_𝚀,  and  [n choose k]_𝚀 = [n]_𝚀! / ([k]_𝚀! [n−k]_𝚀!).

Note that [n]_𝚀 ≠ 0 for all n ∈ [q^κ − 2], and [0]_𝚀! = 1. The quantity [n choose k]_𝚀 is called the Gaussian binomial coefficient [22]. We will also have the convention that [n]_𝚀! = 0 if n < 0, and [n choose k]_𝚀 = 0 if k > n.
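These q-analogues are easy to compute. The Python sketch below (our own illustration, with 𝚀 treated as an integer rather than a field element) computes [n]_𝚀, [n]_𝚀!, and the Gaussian binomial, and checks them against the standard q-Pascal identity [n choose k]_𝚀 = 𝚀^k [n−1 choose k]_𝚀 + [n−1 choose k−1]_𝚀.

```python
# Sketch: Q-integers, Q-factorials, and Gaussian binomials over the integers,
# checked against the q-Pascal identity
#   [n k]_Q = Q^k [n-1 k]_Q + [n-1 k-1]_Q.
Q = 3  # any integer value works formally; over K one would reduce in the field

def qint(n):
    # [n]_Q = 1 + Q + ... + Q^(n-1)
    return sum(Q ** t for t in range(n))

def qfact(n):
    out = 1
    for t in range(1, n + 1):
        out *= qint(t)
    return out

def qbinom(n, k):
    if k < 0 or k > n:
        return 0
    return qfact(n) // (qfact(k) * qfact(n - k))  # division is exact

assert all(
    qbinom(n, k) == Q ** k * qbinom(n - 1, k) + qbinom(n - 1, k - 1)
    for n in range(1, 9) for k in range(n + 1)
)
```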

2.1 Univariate 𝚀-derivatives

For any f(X) ∈ 𝕂[X], the 𝚀-derivative [31, 32, 33, 34, 35] is defined by

𝖣_𝚀 f(X) = (f(𝚀X) − f(X))/((𝚀 − 1)X).

Further, we are interested in iterated applications of the operator 𝖣_𝚀, and so we denote 𝖣_𝚀^0 f(X) = f(X), and 𝖣_𝚀^{t+1} f(X) = 𝖣_𝚀(𝖣_𝚀^t f)(X) for all t ≥ 0.

 Remark 5.

We have 𝖣_𝚀^t(X^k) = [k]_𝚀 [k−1]_𝚀 ⋯ [k−t+1]_𝚀 X^{k−t} for all k, t ≥ 0. Consider any nonconstant f(X) ∈ 𝕂[X] and t ∈ [0, deg(f)]. Then we immediately get deg(𝖣_𝚀^t f) ≤ deg(f) − t by linearity of 𝖣_𝚀^t (see Proposition 6(a) below). More importantly, as a departure from classical derivatives, we get deg(𝖣_𝚀^t f) = deg(f) − t if deg(f) ≤ [κ]_q − 1.

Let us quickly collect a few basic properties of 𝚀-derivatives. For an indeterminate Y and k ≥ 0, denote (X − Y)_𝚀^(k) = ∏_{t=0}^{k−1} (X − 𝚀^t Y). For any f(X) ∈ 𝕂[X] and a ∈ 𝕂^×, denote (f∘a)(X) = f(aX).

Proposition 6 ([37, 17]).
  1. (a)

    Linearity. 𝖣_𝚀^k is a 𝕂-linear map on 𝕂[X], for all k ≥ 0.

  2. (b)

    Scaling. For any f(X) ∈ 𝕂[X] and a ∈ 𝕂^×,

    𝖣_𝚀^k(f∘a)(X) = a^k (𝖣_𝚀^k f)(aX)  for all k ≥ 0.
  3. (c)

    Taylor expansion. For any f(X) ∈ 𝕂[X],

    f(X) = ∑_{k≥0} (𝖣_𝚀^k f(Y)/[k]_𝚀!) (X − Y)_𝚀^(k).

    In particular,

    • For any a ∈ 𝕂, we have f(X) = ∑_{k≥0} (𝖣_𝚀^k f(a)/[k]_𝚀!) (X − a)_𝚀^(k).

    • For any k ≥ 0, we have 𝖣_𝚀^k f(0) = [k]_𝚀! 𝖧^(k) f(0), where 𝖧^(k) f(0) is the k-th Hasse derivative of f(X) at 0. (For f(X) ∈ 𝕂[X], the Hasse derivatives 𝖧^(k) f(Y), k ≥ 0, satisfy f(X) = ∑_{k≥0} 𝖧^(k) f(Y) (X − Y)^k.)

  4. (d)

    Product rule. For any f(X), g(X) ∈ 𝕂[X] and k ≥ 0,

    𝖣_𝚀^k(fg)(X) = ∑_{t=0}^{k} [k choose t]_𝚀 𝖣_𝚀^{k−t} f(𝚀^t X) 𝖣_𝚀^t g(X) = ∑_{t=0}^{k} [k choose t]_𝚀 𝖣_𝚀^{k−t} f(X) 𝖣_𝚀^t g(𝚀^{k−t} X).
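The product rule in Proposition 6(d) can be verified mechanically. The following Python sketch is our own illustration (coefficient-list polynomials over 𝔽_13 with 𝚀 = 2 a generator); it checks the first form of the identity for a sample pair f, g and order k.

```python
# Sketch (illustrative): verifying the Q-derivative product rule
#   D^k(fg)(X) = sum_t [k choose t]_Q D^(k-t)f(Q^t X) D^t g(X)
# over F_q, q = 13 prime, Q = 2 a generator; polynomials as coefficient lists.
q, Q = 13, 2

def qint(n):
    return sum(pow(Q, t, q) for t in range(n)) % q

def qfact(n):
    out = 1
    for t in range(1, n + 1):
        out = out * qint(t) % q
    return out

def qbinom(n, k):
    # Gaussian binomial mod q, via Fermat inversion of the denominator
    return qfact(n) * pow(qfact(k) * qfact(n - k) % q, q - 2, q) % q

def DQ(f, k=1):
    # k-fold Q-derivative of a coefficient list: X^j -> [j]_Q X^(j-1)
    for _ in range(k):
        f = [qint(j) * f[j] % q for j in range(1, len(f))] or [0]
    return f

def scale(f, t):
    # f(Q^t X): coefficient of X^j gets multiplied by Q^(t*j)
    return [c * pow(Q, t * j, q) % q for j, c in enumerate(f)]

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % q
    return out

def add(f, g):
    n = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % q
            for i in range(n)]

f, g, k = [1, 2, 0, 3], [5, 0, 1], 2
lhs = DQ(mul(f, g), k)
rhs = [0]
for t in range(k + 1):
    term = mul(scale(DQ(f, k - t), t), DQ(g, t))
    rhs = add(rhs, [qbinom(k, t) * c % q for c in term])

pad = max(len(lhs), len(rhs))
assert lhs + [0] * (pad - len(lhs)) == rhs + [0] * (pad - len(rhs))
```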

Change of basis

A simple change of basis shows that 𝚀-derivatives are built out of simple evaluations at correlated points. Define an infinite matrix ν(X) ∈ (𝕂(X))^{ℕ×ℕ} by

ν_{k,t}(X) = (−1)^t 𝚀^{(t choose 2) − (k−1)t} [k choose t]_𝚀 / ((𝚀−1)^k X^k),  for all k, t ≥ 0.

Clearly, ν_{k,t}(X) = 0 whenever k < t, that is, ν is lower triangular. Further, we have the diagonal entries ν_{k,k}(X) = (−1)^k / (𝚀^{(k choose 2)} (𝚀−1)^k X^k) ≠ 0 for all k ≥ 0. So ν is invertible, and ξ(X) := ν(X)^{−1} is lower triangular, given by

ξ_{k,t}(X) = (−1)^k 𝚀^{(k choose 2)} (𝚀−1)^k X^k if k = t ≥ 0, and ξ_{k,t}(X) = (−1)^t (𝚀−1)^t X^t 𝚀^{−(k choose 2)+(k−1)t} [k choose t]_𝚀 (1 − (1 − 𝚀^{−(k−t−1)})^{k−t}) if k > t ≥ 0.

This can be proven by instantiating, for instance, [62, Theorem 1.2.3].

Proposition 7 (Change of basis for 𝚀-derivatives).

For any f(X) ∈ 𝕂[X] and k ≥ 0, we have

𝖣_𝚀^k f(X) = ∑_{t=0}^{k} ν_{k,t}(X) f(𝚀^t X)  [42, 4],  and  f(𝚀^k X) = ∑_{t=0}^{k} ξ_{k,t}(X) 𝖣_𝚀^t f(X).
Proof.

For any f(X)𝕂[X], denote

(f𝚀)=[f(𝚀kX)]k(𝕂[X])×1and(𝖣𝚀f)=[𝖣𝚀kf(X)]k(𝕂[X])×1.

We have (𝖣𝚀f)=ν(f𝚀) by [42, 4]. Further, since ξ=ν1, this implies (f𝚀)=ξ(𝖣𝚀f).
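As a concrete instance of this change of basis (without invoking the closed form of ν above), unwinding the divided-difference definition twice expresses 𝖣𝚀²f(x) through the correlated evaluations f(x), f(𝚀x), f(𝚀²x): 𝖣𝚀²f(x) = (f(𝚀²x) − (1+𝚀)f(𝚀x) + 𝚀f(x))/(𝚀(𝚀−1)²x²). The sketch below checks this over the rationals with the illustrative assumption Q = 3.

```python
from fractions import Fraction

Q = Fraction(3)  # illustrative assumption, standing in for the paper's setting

def qint(k):
    return sum((Q**i for i in range(k)), Fraction(0))

def qderiv(f):
    return [qint(k) * f[k] for k in range(1, len(f))]

def evalp(f, x):
    return sum((c * x**k for k, c in enumerate(f)), Fraction(0))

# D_Q^2 f(x) as a lower-triangular combination of f(x), f(Qx), f(Q^2 x),
# in the spirit of Proposition 7
f = [Fraction(c) for c in (2, -1, 0, 4, 1)]   # 2 - X + 4X^3 + X^4
x = Fraction(5)
direct = evalp(qderiv(qderiv(f)), x)
via_evals = (evalp(f, Q**2 * x) - (1 + Q) * evalp(f, Q * x) + Q * evalp(f, x)) \
            / (Q * (Q - 1)**2 * x**2)
assert direct == via_evals
```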

2.2 Multivariate 𝚀-derivatives

Now assume indeterminates 𝕏=(X1,,Xm), and for any subset I[m], denote 𝕏I=(Xi:iI). For every i[m], denote the 𝕂-linear 𝚀-derivative map on 𝕂[Xi] by 𝖣𝚀,Xi, and note that it extends in an obvious (and unique) way to a 𝕂(𝕏[m]{i})-linear map on 𝕂[𝕏]. It is also immediate that 𝖣𝚀,X1,,𝖣𝚀,Xm commute as 𝕂-linear maps on 𝕂[𝕏]. Then for any f(𝕏)𝕂[𝕏] and αm, we define the α-th 𝚀-derivative by

𝖣𝚀αf(𝕏)=𝖣𝚀,X1α1𝖣𝚀,Xmαmf(𝕏).

For α,βm,βα, denote

[α]𝚀=i=1m[αi]𝚀,[α]𝚀!=i=1m[αi]𝚀!,and[αβ]𝚀=[α]𝚀![β]𝚀![αβ]𝚀!=i=1m[αiβi]𝚀.

Also, note the usual binomial coefficient (αβ):=∏i=1m(αiβi). For indeterminates 𝕐=(Y1,…,Ym) and α∈ℕm, denote (𝕏−𝕐)𝚀(α)=∏i=1m(Xi−Yi)𝚀(αi). For any f(𝕏)∈𝕂[𝕏] and a∈(𝕂×)m, denote a𝕏=(a1X1,…,amXm), and (f∘a)(𝕏)=f(a𝕏). Also denote aγ=a1γ1⋯amγm. For any γ∈ℕm, denote 𝚀γ𝕐=(𝚀γ1Y1,…,𝚀γmYm).

We easily get the following basic properties of multivariate 𝚀-derivatives.

Proposition 8.
  1. (a)

    Linearity.𝖣𝚀α is a 𝕂-linear map on 𝕂[𝕏], for all αm.

  2. (b)

    Scaling. For any f(𝕏)𝕂[𝕏] and a𝕂×, we have

    𝖣𝚀β(fa)(𝕏)=aβ𝖣𝚀βf(a𝕏)for all βm.
  3. (c)

    Taylor expansion. For any f(𝕏)𝕂[𝕏],

    f(𝕏)=αm𝖣𝚀αf(𝕐)[α]𝚀!(𝕏𝕐)𝚀(α).

    In particular,

    • For any a∈𝕂m, we have f(𝕏)=∑α∈ℕm𝖣𝚀αf(a)[α]𝚀!(𝕏−a)𝚀(α).

    • For any α∈ℕm, we have 𝖣𝚀αf(0m)=[α]𝚀!𝖧(α)f(0m), where 𝖧(α)f(0m) is the α-th Hasse derivative of f(𝕏) at 0m. (For f(𝕏)∈𝕂[𝕏], the Hasse derivatives 𝖧(α)f(𝕐), α∈ℕm, satisfy f(𝕏)=∑α∈ℕm𝖧(α)f(𝕐)(𝕏−𝕐)α.)

  4. (d)

    Product rule. For any f(𝕏),g(𝕏)𝕂[𝕏] and αm,

    𝖣𝚀α(fg)(𝕏)=βα[αβ]𝚀𝖣𝚀αβf(𝚀β𝕏)𝖣𝚀βg(𝕏)=βα[αβ]𝚀𝖣𝚀αβf(𝕏)𝖣𝚀βg(𝚀αβ𝕏).
Proof.

Each item follows easily by induction on m, with the base cases for Item (a), (b), (c), and (d) being Proposition 6(a), (b), (c), and (d) respectively.

Change of basis

Using the infinite matrices defined in Section 2.1, define two infinite matrices ν(m)(𝕏),ξ(m)(𝕏):(𝕂(𝕏))m×m by

να,β(m)(𝕏)=i=1mναi,βi(Xi),ξα,β(m)(𝕏)=i=1mξαi,βi(Xi),for all α,βm.

It is then easy to see (for instance, by induction on m) that ν(m)(𝕏) is invertible and ξ(m)(𝕏)=(ν(m)(𝕏))1. Also, να,β(m)(𝕏)=ξα,β(m)(𝕏)=0 for all α,βm,αβ, and να,α(m)(𝕏)0ξα,α(m)(𝕏) as well as να,α(m)(𝕏)=1/ξα,α(m)(𝕏) for all αm. Further, we obtain the following change of basis, again by induction on m.

Proposition 9 (Change of basis for multivariate 𝚀-derivatives).

For any f(𝕏)𝕂[𝕏] and αm, we have

𝖣𝚀αf(𝕏)=βανα,β(m)(𝕏)f(𝚀β𝕏),andf(𝚀α𝕏)=βαξα,β(m)(𝕏)𝖣𝚀βf(𝕏).

A monomial basis for function spaces

We assume familiarity with Gröbner basis theory. For definitions and more details, see [13, Chapters 2–5]. For any A𝔽q×, consider the ideal defined (by appealing to the change of basis in Proposition 9) as

𝒟sm(A) ={f(𝕏)𝕂[𝕏]:f(𝚀γa)=0 for all aA and γm,|γ|<s}
={f(𝕏)𝕂[𝕏]:𝖣𝚀γf(a)=0 for all aA and γm,|γ|<s}.

The following is a characterization of a Gröbner basis of 𝒟sm(A) that is universal (invariant under choice of monomial order) and reduced (minimal with respect to divisibility of leading terms). Given any monomial order, a universal Gröbner basis and a reduced Gröbner basis for an ideal both always exist; however, a Gröbner basis that is both universal and reduced may not always exist. A universal reduced Gröbner basis, whenever it exists, is the closest to being the Gröbner basis for an ideal, and allows performing Euclidean division carelessly. This characterization is an extension of [1, Theorem 1.1] and [6, Theorem 4.1], while being a special case of [23, Theorem 4.7]. (A related characterization of standard monomials, a strictly weaker notion than a Gröbner basis, was obtained much earlier by [19, 50] via determination of winning strategies of a lex game, and a further reinterpretation of these winning strategies using the notion of compression for set systems; this reinterpretation was also rediscovered independently by [51]. A similar characterization in the case of classical multivariate derivatives was given in [43, Section 6], and can also be obtained from [23, Theorem 4.7].)

Theorem 10 ([23]).

For any A𝔽q×, the set of polynomials

𝒢sm(A):={i=1mt=0γi1(aiA(Xi𝚀tai)):γm,|γ|=s}

is a universal reduced Gröbner basis for 𝒟sm(A).

Further, for any A𝔽q×, define the function spaces

Vm,s(A) ={[f]:=[[f]a:=[f(𝚀γa)]|γ|<s]aAm:f(𝕏)𝕂[𝕏]},
and𝖣𝚀Vm,s(A) ={[𝖣𝚀f]:=[[𝖣𝚀f]a:=[𝖣𝚀γf(a)]|γ|<s]aAm:f(𝕏)𝕂[𝕏]}.

As an immediate corollary of Theorem 10 and the simple properties of Euclidean division for polynomials, we can describe a monomial basis (a basis where each vector is the appropriate evaluation of a monomial) for the above vector spaces.

Corollary 11.

The collections of evaluation vectors

{[𝕏α]:αs1m(|A|)}and{[𝖣𝚀𝕏α]:αs1m(|A|)}

are 𝕂-linear bases of Vm,s(A) and 𝖣𝚀Vm,s(A) respectively, where

s1m(|A|)={αm:α1|A|++αm|A|s1}.

2.3 Counting 𝚀-roots of polynomials with multiplicities

The discussion so far naturally leads us to the question of bounding the number of 𝚀-roots of a polynomial with multiplicities.

For any nonzero univariate f(X)𝕂[X] and a𝕂, define the 𝚀-multiplicity of f(X) at a by μ𝚀(f,a)=min{k0:𝖣𝚀kf(a)0}. The following claim is easy, by appealing to univariate factorization.

Proposition 12 (Easy).

For any nonempty set A𝔽q×, and nonzero f(X)𝕂[X] with deg(f)d<[3]q, we have aAμ𝚀(f,a)d. In particular, for any s1, we have |{aA:μ𝚀(f,a)s}|d/s.
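A small executable illustration of Proposition 12 (over the rationals, with the illustrative assumption Q = 3): the polynomial (X−a)(X−𝚀a)(X−b) has 𝚀-multiplicity 2 at a, since vanishing at both a and 𝚀a forces 𝖣𝚀f(a) = 0, and the multiplicities over A = {a, b} sum to at most deg(f).

```python
from fractions import Fraction

Q = Fraction(3)  # illustrative assumption, standing in for K = F_{q^3}

def qint(k):
    return sum((Q**i for i in range(k)), Fraction(0))

def qderiv(f):
    return [qint(k) * f[k] for k in range(1, len(f))]

def evalp(f, x):
    return sum((c * x**k for k, c in enumerate(f)), Fraction(0))

def qmult(f, a):
    # mu_Q(f, a) = min{ k >= 0 : D_Q^k f(a) != 0 }, for nonzero f
    k = 0
    while evalp(f, a) == 0:
        f, k = qderiv(f), k + 1
    return k

def polymul(f, g):
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

# f = (X - a)(X - Qa)(X - b): Q-multiplicity 2 at a, 1 at b
a, b = Fraction(1), Fraction(7)
f = polymul(polymul([-a, Fraction(1)], [-Q * a, Fraction(1)]), [-b, Fraction(1)])
assert qmult(f, a) == 2 and qmult(f, b) == 1
# Total multiplicity over A = {a, b} is bounded by deg(f) = 3 (Proposition 12)
assert qmult(f, a) + qmult(f, b) <= 3
```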

In the multivariate setting, for any nonzero f(𝕏)𝕂[𝕏] and a𝕂m, define the 𝚀-multiplicity of f(𝕏) at a by

μ𝚀(f,a)=min{k0:there exists γm,|γ|=k such that 𝖣𝚀γf(a)0}.

In this section, we will prove the following multivariate extension of Proposition 12, which is analogous to [15, Lemma 2.7] in the setting of classical derivatives.

Theorem 13.

For any nonempty set A𝔽q×, and any nonzero f(𝕏)𝕂[𝕏] with deg(f)d<[3]q, we have

aAmμ𝚀(f,a)d|A|m1.

In particular, for any s1, we have |{aAm:μ𝚀(f,a)s}|d|A|m1/s.

Towards a proof of Theorem 13, we first consider some simple consequences of the definition of multivariate 𝚀-multiplicity.

Lemma 14.

Consider any nonzero f(𝕏)𝕂[𝕏].

  1. (a)

    For any a𝕂m, if μ𝚀(f,a)k, then

    μ𝚀(𝖣𝚀γf,a)k|γ|for all γm,|γ|k.
  2. (b)

    For any (a1,,am)𝕂m and γm, if 𝖣𝚀γf(a1,,am1,Xm)0, then

    μ𝚀(𝖣𝚀γf,(a1,,am1,am))μ𝚀(𝖣𝚀γf(a1,,am1,Xm),am).
Proof.
  1. (a)

    Since μ𝚀(f,a)k, we have

    𝖣𝚀γf(a)=0for all γm,|γ|k1.

    Consider any γ,βm,|γ|k,|β|k|γ|1. So |β+γ|k1, and therefore 𝖣𝚀β(𝖣𝚀γf)(a)=𝖣𝚀β+γf(a)=0. This means μ𝚀(𝖣𝚀γf,a)k|γ|.

  2. (b)

    Suppose μ𝚀(𝖣𝚀γf,(a1,,am1,am))=k. This means

    𝖣𝚀β+γf(a1,,am1,am)=0for all βm,|β|k1.

    In particular, restricting our attention to βm of the form β=(0m1,t), we get

    𝖣𝚀,Xmt(𝖣𝚀γf)(a1,,am1,am)=0for all tk1.

    This means μ𝚀(𝖣𝚀γf(a1,,am1,Xm),am)k.

We can now prove Theorem 13.

Proof of Theorem 13.

We prove the theorem by induction on m. The base case m=1 is true by Proposition 12. Now suppose the assertion is true for some m≥1, and consider indeterminates 𝕏=(X1,…,Xm,Xm+1). Without loss of generality, assume deg(f)=d, and write

f(X1,…,Xm,Xm+1)=∑t=0ℓft(X1,…,Xm)Xm+1t,where fℓ(X1,…,Xm)≠0 and deg(fℓ)≤d−ℓ.

For any a1,…,am∈A, denote sa1,…,am=μ𝚀(fℓ,(a1,…,am)). So by the induction hypothesis, we have

∑(a1,…,am)∈Am sa1,…,am≤(d−ℓ)|A|m−1. (2)

Now fix a1,…,am∈A, and let γ=(γ1,…,γm)∈ℕm such that |γ|=sa1,…,am and
𝖣𝚀γfℓ(a1,…,am)≠0. This means

  • 𝖣𝚀γfℓ(X1,…,Xm)≠0, and so

    𝖣𝚀(γ,0)f(X1,…,Xm,Xm+1)=∑t=0ℓ𝖣𝚀γft(X1,…,Xm)Xm+1t≠0.
  • g(γ)(Xm+1):=𝖣𝚀(γ,0)f(a1,…,am,Xm+1)≠0, and deg(g(γ))=ℓ.

Then, for any am+1A, we get

μ𝚀(f,(a1,…,am,am+1)) ≤|(γ,0)|+μ𝚀(𝖣𝚀(γ,0)f,(a1,…,am,am+1)) by Lemma 14(a)
≤sa1,…,am+μ𝚀(g(γ),am+1) by Lemma 14(b)

So by Proposition 12,

∑am+1∈Aμ𝚀(f,(a1,…,am,am+1))≤sa1,…,am|A|+ℓ.

Then (2) implies

∑(a1,…,am,am+1)∈Am+1μ𝚀(f,(a1,…,am,am+1))≤(d−ℓ)|A|m+ℓ|A|m=d|A|m,

which completes the induction.
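The bound of Theorem 13 can be sanity-checked on a small bivariate example (m = 2), again over the rationals with the illustrative assumption Q = 3; the example below is in fact tight, with total multiplicity exactly d·|A|^(m−1).

```python
from fractions import Fraction
from itertools import product

Q = Fraction(3)  # illustrative assumption, standing in for F_{q^3}

def qint(k):
    return sum((Q**i for i in range(k)), Fraction(0))

def qderiv_var(f, v):
    # f: dict {(i, j): coeff}; Q-derivative in variable v (0 or 1)
    out = {}
    for mono, c in f.items():
        if mono[v] > 0:
            new = list(mono); new[v] -= 1
            out[tuple(new)] = out.get(tuple(new), Fraction(0)) + qint(mono[v]) * c
    return out

def evalp(f, pt):
    return sum((c * pt[0]**i * pt[1]**j for (i, j), c in f.items()), Fraction(0))

def qmult(f, pt, deg):
    # smallest k with some D_Q^gamma f(pt) != 0, |gamma| = k
    for k in range(deg + 1):
        for i in range(k + 1):
            g = f
            for _ in range(i):
                g = qderiv_var(g, 0)
            for _ in range(k - i):
                g = qderiv_var(g, 1)
            if evalp(g, pt) != 0:
                return k
    return deg + 1

# f(X, Y) = (X - 1)(X - 3)(Y - 2) = X^2 Y - 2X^2 - 4XY + 8X + 3Y - 6,
# degree d = 3, grid A = {1, 2}
f = {(2, 1): Fraction(1), (2, 0): Fraction(-2), (1, 1): Fraction(-4),
     (1, 0): Fraction(8), (0, 1): Fraction(3), (0, 0): Fraction(-6)}
A = [Fraction(1), Fraction(2)]
total = sum(qmult(f, pt, 3) for pt in product(A, A))
assert total <= 3 * len(A)   # d |A|^(m-1) with d = 3, m = 2
```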

3 List decoding/recovery of 𝚀-multiplicity codes

We will now move to our main result on list decoding multivariate 𝚀-multiplicity codes. In fact, we will present the algorithm for the more general list recovery paradigm.

3.1 List decoding/recovery of univariate 𝚀-multiplicity codes

Let us see that the classical univariate multiplicity code list decoder [29] can be suitably adapted to list decode 𝚀Mults(A;k), which therefore provides an alternative algorithm to list decode FRS codes. We will, in fact, consider list recovery.

Theorem 15.

Consider any R∈(0,1), ϵ∈(0,1−R). For any A⊆𝔽q×, |A|=n, and ℓ≥1, and for the choices s=ℓ/ϵ2 and k=Rsn, the code 𝚀Mults(A;k) is efficiently list recoverable up to radius 1−R−ϵ with output list size at most (ℓ/ϵ)O((1+logℓ)/ϵ).

Consider additional indeterminates 𝕐=(Y0,,Ys1), and the 𝕂-linear subspace of the polynomial ring 𝕂[X,𝕐] defined by

𝖫s(X,𝕐)={P~(X)+P0(X)Y0++Ps1(X)Ys1:P~(X),P0(X),,Ps1(X)𝕂[X]}.

Also, denote Ys=0. Define a 𝕂-linear operator Δ:𝖫s(X,𝕐)𝖫s(X,𝕐) by

Δ(P~(X)+j=0s1Pj(X)Yj)=𝖣𝚀P~(X)+j=0s1(𝖣𝚀Pj(X)Yj+Pj(𝚀X)Yj+1).

We will be interested in iterated applications of Δ, and therefore, denote Δ0=Id and Δr+1=ΔΔr for all r0. For any P(X,𝕐)𝖫s(X,𝕐) and f(X)𝕂[X], denote
P[f](X)=P(X,f(X),𝖣𝚀f(X),,𝖣𝚀s1f(X)). The following is immediate by the definition of Δ.

Lemma 16.

Let P(X,𝕐)𝖫s(X,𝕐).

  1. (a)

    If j∈[0,s−1] is such that degYj′(P)=0 for all j′≥j, then degYj′(ΔP)=0 for all j′≥j+1.

  2. (b)

    (ΔP)[f](X)=𝖣𝚀(P[f])(X), for all f(X)𝕂[X].

Proof.

Let P(X,𝕐)=P~(X)+P0(X)Y0++Ps1(X)Ys1𝖫s(X,𝕐).

  1. (a)

    If j∈[0,s−1] is such that degYj′(P)=0 for all j′≥j, this means Pj(X)=⋯=Ps−1(X)=0. Now we have

    (ΔP)(X,𝕐) =𝖣𝚀P~(X)+j=0s1(𝖣𝚀Pj(X)Yj+Pj(𝚀X)Yj+1)
    =𝖣𝚀P~(X)+j=0s1(Pj1(𝚀X)+𝖣𝚀Pj(X))Yj.

    So, Pj′−1(𝚀X)+𝖣𝚀Pj′(X)=0 for all j′≥j+1. This means degYj′(ΔP)=0 for all j′≥j+1.

  2. (b)

    We have

    (ΔP)[f](X) =𝖣𝚀P~(X)+j=0s1(𝖣𝚀Pj(X)𝖣𝚀jf(X)+Pj(𝚀X)𝖣𝚀j+1f(X))
    =𝖣𝚀P~(X)+j=0s1𝖣𝚀(Pj𝖣𝚀jf)(X) by Proposition 6(d)
    =𝖣𝚀P[f](X).  
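Lemma 16(b) can also be checked mechanically. The sketch below represents P ∈ 𝖫s(X,𝕐) for s = 3 as a pair (P~, [P0, P1, P2]) of coefficient lists, with P2 = 0 so the truncation Ys = 0 never loses a term, and verifies (ΔP)[f] = 𝖣𝚀(P[f]) on sample inputs, over the rationals with the illustrative assumption Q = 3.

```python
from fractions import Fraction

Q = Fraction(3)  # illustrative assumption

def qint(k):
    return sum((Q**i for i in range(k)), Fraction(0))

def qderiv(f):
    return [qint(k) * f[k] for k in range(1, len(f))]

def dilate(f, a):
    return [c * a**k for k, c in enumerate(f)]

def polyadd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else Fraction(0)) +
            (g[i] if i < len(g) else Fraction(0)) for i in range(n)]

def polymul(f, g):
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def delta(Pt, Ps):
    # Delta(Pt + sum_j Pj Yj) = D_Q Pt + sum_j (D_Q Pj + P_{j-1}(QX)) Yj,
    # with the top term P_{s-1}(QX) Y_s dropped since Y_s = 0
    new = [qderiv(p) for p in Ps]
    for j in range(1, len(Ps)):
        new[j] = polyadd(new[j], dilate(Ps[j - 1], Q))
    return qderiv(Pt), new

def instantiate(Pt, Ps, f):
    # P[f](X) = Pt(X) + sum_j Pj(X) (D_Q^j f)(X)
    out, df = Pt, f
    for p in Ps:
        out = polyadd(out, polymul(p, df))
        df = qderiv(df)
    return out

Pt = [Fraction(c) for c in (1, 0, 2)]
Ps = [[Fraction(3), Fraction(1)], [Fraction(0), Fraction(5)], [Fraction(0)]]
f  = [Fraction(c) for c in (2, -1, 0, 1)]
dPt, dPs = delta(Pt, Ps)
lhs = instantiate(dPt, dPs, f)          # (Delta P)[f]
rhs = qderiv(instantiate(Pt, Ps, f))    # D_Q (P[f])
assert lhs == rhs
```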

It is worth noting that a close variant of the operator Δ appears in [24, Definition 8.1] in the context of nearly linear-time list decoding of FRS codes, without the interpretation in terms of 𝚀-derivatives. So it is reasonable to surmise that a nearly linear-time implementation of the list decoding algorithm à la [24] is possible for the univariate 𝚀-multiplicity codes. In order to keep our main presentation short, we limit ourselves here to adapting the more conventional polynomial-time algorithm of [29].

The interpolation step of the list recovery algorithm, which will be used multiple times in this discussion, can be captured as follows.

Proposition 17.

Consider any A={α1,…,αn}⊆𝔽q×, parameters s≥r≥1, and k∈[s|A|]. Let ℓ≥1, and define

d=⌈(ℓn(s−r+1)−(r+k)+1)/(r+1)⌉.

Then for any S1,,Sn𝕂s with |Si|,i[n], there exists a nonzero polynomial P(X,𝕐)=P~(X)+j=0r1Pj(X)Yj𝖫s(X,𝕐) with

deg(P~)d+k1,anddeg(Pj)dfor all j[0,r1],

such that for any f(X)𝕂[X],deg(f)<k,

if 𝖽A(s)(f,S)≤1−ℓ/(r+1)−k/((s−r+1)n), then P(X,f(X),𝖣𝚀f(X),…,𝖣𝚀s−1f(X))=0.
Proof.

Assume the notation u=(u(0),…,u(s−1))∈𝕂s. We can, in fact, add additional arbitrary elements and assume |Si|=ℓ for all i∈[n]. Also enumerate Si={wi,1,…,wi,ℓ}, i∈[n]. We will construct a nonzero polynomial P(X,𝕐)=P~(X)+∑j=0s−1Pj(X)Yj∈𝖫s(X,𝕐) such that

  1. (a)

    P(X,𝕐)=P~(X)+j=0r1Pj(X)Yj𝖫s(X,𝕐), that is, Pr(X)==Ps1(X)=0.

  2. (b)

    deg(P~)d+k1, and deg(Pj)d for all j[0,r1].

  3. (c)

    (ΔjP)(αi,wi,t(0),…,wi,t(s−1))=0 for all j∈[0,s−r], t∈[ℓ], i∈[n].

The number of linear constraints is ℓn(s−r+1), and the number of coefficients is d+k+r(d+1)=d(r+1)+(r+k). So for the choice

d=⌈(ℓn(s−r+1)−(r+k)+1)/(r+1)⌉,

we indeed have d(r+1)+(r+k)>ℓn(s−r+1), and this ensures a nontrivial solution, that is, P(X,𝕐)≠0.

Now consider any f(X)∈𝕂[X], and suppose 𝖽A(s)(f,S)=1−(ν/n). By Lemma 16, this implies ∑a∈Aμ𝚀(P[f],a)≥ν(s−r+1). But we also have deg(P[f])≤d+k−1. Therefore, by Proposition 12, we can conclude that P[f](X)=0 if ν>(d+k−1)/(s−r+1). This is equivalent to

ν>((ℓn(s−r+1)−(r+k)+1)/(r+1)+k−1)/(s−r+1)=(ℓn(s−r+1)−(r+k)+(r+1)(k−1)+1)/((r+1)(s−r+1)).

The above is true if

ν≥ℓn/(r+1)+k/(s−r+1), that is, ν/n≥ℓ/(r+1)+k/((s−r+1)n).

Once we have an interpolating polynomial that captures the correct codewords, we can extract the output list as follows.

Proposition 18.

Consider parameters sr1, and k,d1 such that d+k1<[3]q. If P(X,𝕐)=P~(X)+j=0r1Pj(X)Yj𝖫s(X,𝕐) is a nonzero polynomial with

deg(P~)d+k1,anddeg(Pj)dfor all j[0,r1],

then the solution space

{f(X)𝕂[X]:P[f](X)=0}

is an affine 𝕂-linear space with dimension at most r1.

Proof.

Consider any f(X)𝕂[X] satisfying P[f](X)=0. Note that we have P(X,𝕐)0. If Pj(X)=0 for all j[0,r1], then P[f](X)=P~(X)=0, which means P(X,𝕐)=0, a contradiction. So there exists j[0,r1] such that Pj(X)0. Without loss of generality, we assume Pr1(X)0. (Otherwise, we work with the largest r such that Pr1(X)0.)

Let us also quickly consider another notation. For any 𝚀b𝕂×,h0, and g(X)𝕂[X], denote

gh,b:=coeff((X𝚀b)𝚀(h)[h]𝚀!,g)=𝖣𝚀hg(𝚀b) by Proposition 6(c)

Recall that 𝕂=𝔽q3. Since deg(Pr1)d<[3]q, there exists 𝚀b𝕂× such that Pr1(𝚀h+b)0 for all h[0,d+k1]. Further, since deg(P)d+k1<[3]q, we will work with the basis of monomials {(X𝚀b)𝚀(h):h[0,d+k1]}. Now we have

P[f](X)P~(X)+P0(X)f(X)+P1(X)𝖣𝚀f(X)++Pr1(X)𝖣𝚀r1f(X)=0

So by Proposition 6(c) and (d), for every h∈[0,d+k−1], we have

0=Ph,b[f] =(P~)h,b+j=0r1c=0h[hc]𝚀(Pj)hc,c+b(𝖣𝚀jf)c,b
=(P~)h,b+j=0r1c=0h[hc]𝚀(Pj)hc,c+bfc+j,b.

Now for every h[0,d+k1], since (Pr1)0,h+b=Pr1(𝚀h+b)0, we can rewrite the above

fh+r1,b=1(Pr1)0,h+b((P~)h,b+(j,c)(r1,h)(j,c)(r1,h)[hc]𝚀(Pj)hc,c+bfc+j,b).

So we can set the undetermined coefficients among f0,b,…,fr−2,b∈𝕂 freely, and the remaining coefficients of f(X) are uniquely determined by the above equation. This means the solution space is an affine 𝕂-linear space having dimension at most r−1.
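The back-substitution above can be sketched in code. For simplicity the sketch works in the standard monomial basis over the rationals with the illustrative assumption Q = 3, and assumes the top coefficient polynomial P1 has a nonzero constant term (the proof instead passes to the basis (X−𝚀b)𝚀(h) over 𝔽q3 to guarantee nonvanishing). With r = 2, the single free coefficient f0 determines all of f, matching dimension r−1 = 1.

```python
from fractions import Fraction

Q = Fraction(3)  # illustrative assumption

def qint(k):
    return sum((Q**i for i in range(k)), Fraction(0))

def qderiv(f):
    return [qint(k) * f[k] for k in range(1, len(f))]

def polymul(f, g):
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def coeff(f, i):
    return f[i] if 0 <= i < len(f) else Fraction(0)

def solve(Pt, P0, P1, k, f0):
    # back-substitute: the coefficient of X^h in Pt + P0 f + P1 D_Q f must
    # vanish; the highest unknown there is f_{h+1}, with factor P1[0]*[h+1]_Q
    f = [f0]
    for h in range(k - 1):
        acc = coeff(Pt, h)
        acc += sum(coeff(P0, i) * coeff(f, h - i) for i in range(h + 1))
        acc += sum(coeff(P1, i) * qint(h - i + 1) * coeff(f, h - i + 1)
                   for i in range(1, h + 1))
        f.append(-acc / (coeff(P1, 0) * qint(h + 1)))
    return f

# Build an instance with a known solution: pick f, set Pt := -(P0 f + P1 D_Q f)
f_true = [Fraction(c) for c in (4, 1, 0, 2)]                    # deg < k = 4
P0, P1 = [Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]
Pt = [-c for c in polymul(P0, f_true)]
for i, c in enumerate(polymul(P1, qderiv(f_true))):
    Pt[i] -= c
assert solve(Pt, P0, P1, 4, f_true[0]) == f_true
```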

The final list recovery result can now be proven.

Proof of Theorem 15.

Choosing r=ℓ/ϵ, and plugging in all the parameters into Proposition 17 and Proposition 18 immediately implies that the output list is contained in an affine 𝕂-linear space with dimension at most ℓ/ϵ−1. The pruning algorithm with the improved analysis [44, 66] then asserts that the output list size is at most (ℓ/ϵ)O((1+logℓ)/ϵ).

3.2 List decoding/recovery of multivariate 𝚀-multiplicity codes

We will now see an algorithm to list decode 𝚀Multm,s(A;k). Even though it is inspired by the classical multivariate multiplicity code list decoder [9], it turns out that we can give a simpler algorithm and analysis that is perhaps the correct multivariate extension of our list decoder in the univariate case. We will, in fact, perform list recovery, and the main theorem statement is the following.

Theorem 19.

Consider any δ∈(0,1), ϵ∈(0,1−δ). For any A⊆𝔽q× and ℓ≥1, and for the choices s=O(ℓ/ϵ2m+1) and k=(1−δ)s|A|, the code 𝚀Multsm(A;k) is efficiently list recoverable up to radius δ−ϵ with output list contained in a 𝕂-affine space of dimension at most O(ℓm/ϵ2m).

Note that setting ℓ=1 in Theorem 19 gives Theorem 2.

Consider additional indeterminates 𝕐=(Yγ)|γ|<s,𝐙=(Z1,,Zm), and the 𝕂-linear subspace of the polynomial ring 𝕂[𝕏,𝕐] defined by

𝖫s(𝕏,𝕐)={P~(𝕏)+j=0s1Pj(𝕏)(|γ|=jYγ𝐙γ)𝕂(𝐙)[𝕏,𝕐]}.

Also, denote Yγ=0 for all γm,|γ|s. For any αm, define a 𝕂-linear operator Δ(α):𝖫s(𝕏,𝕐)𝖫s(𝕏,𝕐) by

Δ(α)(P~(𝕏)+j=0s1Pj(𝕏)(|γ|=jYγ𝐙γ))=𝖣𝚀αP~(𝕏)+j=0s1|γ|=j(βα[αβ]𝚀𝖣𝚀αβPj(𝚀β𝕏)Yβ+γ)𝐙γ.

For any P(𝕏,𝕐,𝐙)𝖫s(𝕏,𝕐) and f(𝕏)𝕂[𝕏], denote

P[f](𝕏)=P(𝕏,(𝖣𝚀γf(𝕏))|γ|<s,𝐙)𝕂(𝐙)[𝕏].

The following is then immediate.

Lemma 20.

Let P(𝕏,𝕐,𝐙)𝖫s(𝕏,𝕐).

  1. (a)

    If j∈[0,s−1] is such that degYγ(P)=0 for all γ∈ℕm, |γ|≥j, then degYγ(Δ(α)P)=0 for all γ∈ℕm, |γ|≥j+|α|.

  2. (b)

    (Δ(α)P)[f](𝕏)=𝖣𝚀α(P[f])(𝕏), for all f(𝕏)𝕂[𝕏].

 Remark 21.

A consequence of Lemma 20 is that Δ(α+β)=Δ(α)Δ(β) for all α,βm. We do not explicitly use this elsewhere.

Proof of Lemma 20.

Let P(𝕏,𝕐,𝐙)=P~(𝕏)+j=0s1Pj(𝕏)(|γ|=jYγ𝐙γ)𝖫s(𝕏,𝕐).

  1. (a)

    If j∈[0,s−1] is such that degYγ(P)=0 for all γ∈ℕm, |γ|≥j, this means Pj′(𝕏)=0 for all j′≥j. Now we have

    (Δ(α)P)(𝕏,𝕐,𝐙) =𝖣𝚀αP~(𝕏)+j=0s1|γ|=j(βα[αβ]𝚀𝖣𝚀αβPj(𝚀β𝕏)Yβ+γ)𝐙γ
    =𝖣𝚀αP~(𝕏)+j=0s1|γ|=j(βα[αβ]𝚀𝖣𝚀αβPj|β|(𝚀β𝕏)𝐙γβ)Yγ

    So, ∑β≤α[αβ]𝚀𝖣𝚀αβPj′−|β|(𝚀β𝕏)𝐙γ−β=0 for all j′≥j+|α|. This means degYγ(Δ(α)P)=0 for all γ∈ℕm, |γ|≥j+|α|.

  2. (b)

    We have

    (Δ(α)P)[f](𝕏) =𝖣𝚀αP~(𝕏)+j=0s1|γ|=j(βα[αβ]𝚀𝖣𝚀αβPj(𝚀β𝕏)𝖣𝚀β+γf(𝕏))𝐙γ
    =𝖣𝚀αP~(𝕏)+j=0s1|γ|=j𝖣𝚀α(Pj𝖣𝚀γf)(𝕏)𝐙γ by Proposition 8(d)
    =𝖣𝚀α(P[f])(𝕏).  

The interpolation step of the list recovery algorithm, which will be used multiple times in this discussion, can be captured as follows.

Proposition 22.

Consider any A⊆𝔽q×, parameters s≥r≥1, and k∈[s|A|]. Let ℓ≥1, and define

d=5(ℓ/(r+1))1/m(s−r+1)|A|.

Then for any Sa𝕂(m+s1s1) with |Sa|,aAm, there exists a nonzero polynomial P(𝕏,𝕐,𝐙)=P~(𝕏)+j=0r1Pj(𝕏)|γ|=jYγ𝐙γ𝖫s(𝕏,𝕐) with

deg(P~)d+k1,anddeg(Pj)dfor all j[0,r1].

such that for any f(𝕏)𝕂[𝕏],deg(f)<k,

if 𝖽A(m,s)(f,S)5(ℓ/(r+1))1/m+k(sr+1)|A|m1, then P[f](𝕏)=0.

Moreover, we can ensure that P(𝕏,𝕐,𝐙) is a polynomial in 𝐙 with deg𝐙(P)<ms.

Proof.

Assume the notation u=(u(γ):|γ|<s)∈𝕂(m+s1s1). We can, in fact, add additional arbitrary elements and assume |Sa|=ℓ for all a∈Am. Also enumerate Sa={wa,1,…,wa,ℓ}, a∈Am. We will construct a nonzero polynomial P(𝕏,𝕐,𝐙)=P~(𝕏)+∑j=0s−1Pj(𝕏)(∑|γ|=jYγ𝐙γ)∈𝖫s(𝕏,𝕐) such that

  1. (a)

    P(𝕏,𝕐,𝐙)=P~(𝕏)+j=0r1Pj(𝕏)(|γ|=jYγ𝐙γ), that is, Pr(𝕏)==Ps1(𝕏)=0.

  2. (b)

    deg(P~)d+k1, and deg(Pj)d for all j[0,r1].

  3. (c)

    (Δ(α)P)(a,(wa,t(γ))|γ|<s)=0 for all α∈ℕm with |α|∈[0,s−r], t∈[ℓ], a∈Am.

The number of linear constraints is

|A|m(m+srsr)(5(sr+1)|A|m)m,

and the number of coefficients is

(m+d+k1d+k1)+r(m+dd)(r+1)(m+dd)(r+1)(dm)m.

So for the choice

d=5(ℓ/(r+1))1/m(s−r+1)|A|,

we indeed get a nontrivial solution, that is, P(𝕏,𝕐,𝐙)0. Further, since each linear constraint is a polynomial in 𝐙=(Z1,,Zm) having degree at most s1, we can ensure that P(𝕏,𝕐,𝐙) is a polynomial in 𝐙 having deg𝐙(P)<ms (see [38] or [9, Lemma 3].)

Now consider any f(𝕏)∈𝕂[𝕏], and suppose 𝖽A(m,s)(f,S)=1−(ν/|A|m). By Lemma 20, this implies ∑a∈Amμ𝚀(P[f],a)≥ν(s−r+1). But we also have deg(P[f])≤d+k−1. Therefore, by Theorem 13, we can conclude that P[f](𝕏)=0 if ν>(d+k−1)|A|m−1/(s−r+1), which holds if

ν>5(ℓ/(r+1))1/m(sr+1)|A|m1+k1sr+1.

The above is true if

ν ≥ 5(ℓ/(r+1))1/m|A|m1+ksr+1,
that is, ν/|A|m1 ≥ 5(ℓ/(r+1))1/m+k(sr+1)|A|m1.

Once we have an interpolating polynomial that captures the correct codewords, we can extract the output list as follows.

Proposition 23.

Consider parameters sr1, and k,d1 such that d+k1<[3]q. If P(𝕏,𝕐,𝐙)=P~(𝕏)+j=0r1Pj(𝕏)|γ|=jYγ𝐙γ𝖫s(𝕏,𝕐,𝐙) is a nonzero polynomial with

deg(P~)d+k1,anddeg(Pj)dfor all j[0,r1],

then the solution space

{f(𝕏)𝕂[𝕏]:P[f](𝕏)=0}

is an affine 𝕂-linear space with dimension at most (m+r2r2).

Proof.

Consider any f(𝕏)𝕂[𝕏] satisfying P[f](𝕏)=0. Note that we have P(𝕏,𝕐,𝐙)0. If Pj(𝕏)=0 for all j[0,r1], then P[f](𝕏)=P~(𝕏)=0, which means P(𝕏,𝕐,𝐙)=0, a contradiction. So there exists j[0,r1] such that Pj(𝕏)0. Without loss of generality, we assume Pr1(𝕏)0. (Otherwise, we work with the largest r such that Pr1(𝕏)0.)

Let us also quickly consider another notation. For any 𝚀β(𝕂×)m,αm, and g(𝕏)𝕂[𝕏], denote

gα,β:=coeff((𝕏𝚀β)𝚀(α)[α]𝚀!,g)=𝖣𝚀αg(𝚀β) by Proposition 8(c)

Recall that 𝕂=𝔽q3. Since deg(Pr1)d<[3]q, there exists 𝚀β(𝕂×)m such that Pr1(𝚀α+β)0 for all αm,|α|d+k1. Further, since deg(P)d+k1<[3]q, we will work with the basis of monomials {(𝕏𝚀β)𝚀(α):αm,|α|d+k1}. Now we have

P[f](𝕏)P~(𝕏)+j=0r1Pj(𝕏)|γ|=j𝖣𝚀γf(𝕏)𝐙γ=0.

So for every αm,|α|d+k1, we have

0=Pα,β[f] =P~α,β+j=0r1|γ|=j(θα[αθ]𝚀(Pj)αθ,θ+β(𝖣𝚀γf)θ,β𝐙γ)
=P~α,β+j=0r1θα[αθ]𝚀(Pj)αθ,θ+β(|γ|=jfθ+γ,β𝐙γ).

Now for every αm,|α|d+k1, since Pr1(𝚀α+β)=(Pr1)0m,α+β0, we can rewrite the above as

|γ|=r1fα+γ,β𝐙γ
=1(Pr1)0m,α+β((P~)α,β+(j,θ)(r1,α)(j,θ)(r1,α)[αθ]𝚀(Pj)αθ,θ+β(|γ|=jfθ+γ,β𝐙γ)) (3)

So we can set the undetermined coefficients among fγ,β∈𝕂, |γ|≤r−2 freely, and the remaining coefficients of f(𝕏) are uniquely determined by the above equation. This means the solution space is an affine 𝕂-linear space having dimension at most (m+r2r2). Further, at any instance of (3), if we know the R.H.S. and determine the L.H.S., then each coefficient fα+γ,β can be recovered by instantiating an arbitrary finite grid Sm with S⊆𝕂, |S|≥ms as a hitting set for the 𝐙-variables, since we have deg𝐙(P)<ms.

We are now ready to complete the proof of our main theorem.

Proof of Theorem 19.

Choosing r=O(ℓ/ϵm), and plugging in all the parameters into Proposition 22 and Proposition 23 immediately proves the claim.

References

  • [1] Noga Alon. Combinatorial Nullstellensatz. Combinatorics, Probability and Computing, 8(1-2):7–29, 1999. doi:10.1017/S0963548398003411.
  • [2] Omar Alrabiah, Venkatesan Guruswami, and Ray Li. Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1458–1469, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649634.
  • [3] G.E. Andrews. q-Series: Their Development and Application in Analysis, Number Theory, Combinatorics, Physics and Computer Algebra, volume 66 of CBMS Regional Conference Lecture Series in Mathematics. AMS, 1986.
  • [4] M.H. Annaby and Z.S. Mansour. q-Taylor and interpolation series for Jackson q-difference operators. Journal of Mathematical Analysis and Applications, 344(1):472–483, 2008. doi:10.1016/j.jmaa.2008.02.033.
  • [5] László Babai, Lance Fortnow, Leonid A. Levin, and Mario Szegedy. Checking computations in polylogarithmic time. In Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, STOC ’91, pages 21–32, New York, NY, USA, 1991. Association for Computing Machinery. doi:10.1145/103418.103428.
  • [6] Simeon Ball and Oriol Serra. Punctured combinatorial nullstellensätze. Combinatorica, 29(5):511–522, 2009. doi:10.1007/s00493-009-2509-z.
  • [7] Eli Ben-Sasson, Swastik Kopparty, and Jaikumar Radhakrishnan. Subspace Polynomials and Limits to List Decoding of Reed–Solomon Codes. IEEE Transactions on Information Theory, 56(1):113–120, 2010. doi:10.1109/TIT.2009.2034780.
  • [8] Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, and Ashutosh Shankar. Algorithmizing the Multiplicity Schwartz-Zippel Lemma. In Nikhil Bansal and Viswanath Nagarajan, editors, Proceedings of the 2023 ACM-SIAM Symposium on Discrete Algorithms, SODA 2023, Florence, Italy, January 22-25, 2023, pages 2816–2835. SIAM, 2023. doi:10.1137/1.9781611977554.ch106.
  • [9] Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, and Madhu Sudan. Decoding Multivariate Multiplicity Codes on Product Sets. IEEE Transactions on Information Theory, 70(1):154–169, 2024. doi:10.1109/TIT.2023.3306849.
  • [10] George Boole. A Treatise on the Calculus of Finite Differences. MacMillan and Co., 1860. Reprint: Cambridge University Press, 2009. doi:10.1017/CBO9780511693014.
  • [11] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Generic Reed-Solomon Codes Achieve List-Decoding Capacity. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1488–1501, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585128.
  • [12] Yeyuan Chen and Zihan Zhang. Explicit Folded Reed-Solomon and Multiplicity Codes Achieve Relaxed Generalized Singleton Bound. arXiv Preprint, 2024. doi:10.48550/arXiv.2408.15925.
  • [13] David Cox, John Little, and Donald O’Shea. Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra. Springer Science & Business Media, 2013.
  • [14] P. Delsarte, J.M. Goethals, and F.J. Mac Williams. On generalized Reed Muller codes and their relatives. Information and Control, 16(5):403–442, 1970. doi:10.1016/S0019-9958(70)90214-7.
  • [15] Zeev Dvir, Swastik Kopparty, Shubhangi Saraf, and Madhu Sudan. Extensions to the Method of Multiplicities, with Applications to Kakeya Sets and Mergers. SIAM Journal on Computing, 42(6):2305–2328, 2013. doi:10.1137/100783704.
  • [16] Peter Elias. List Decoding for Noisy Channels. Ph.d. thesis, Massachusetts Institute of Technology, 1957. RLE Technical Report No. 335. http://hdl.handle.net/1721.1/4484.
  • [17] Thomas Ernst. A Comprehensive Treatment of q-Calculus. Birkhäuser Basel, 2012. doi:10.1007/978-3-0348-0431-8.
  • [18] H. Exton. q-Hypergeometric Functions and Applications. Halsted Press, Chichister, 1983.
  • [19] Bálint Felszeghy, Balázs Ráth, and Lajos Rónyai. The lex game and some applications. Journal of Symbolic Computation, 41(6):663–681, 2006. doi:10.1016/j.jsc.2005.11.003.
  • [20] Dominique Foata and Guo-Niu Han. The q-Series in Combinatorics: Permutation Statistics (Lecture addendums, revised version). Unpublished, 2021. Available at https://irma.math.unistra.fr/~foata/paper/pub91.pdf.
  • [21] George Gasper and Mizan Rahman. Basic Hypergeometric Series. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2 edition, 2004. doi:10.1017/CBO9780511526251.
  • [22] Carl Friedrich Gauß. Summatio quarumdam serierum singularium. Dieterich, 1808. Digitized version available at http://eudml.org/doc/203313.
  • [23] O. Geil and U. Martínez-Peñas. Bounding the Number of Common Zeros of Multivariate Polynomials and their Consecutive Derivatives. Combinatorics, Probability and Computing, 28(2):253–279, 2019. doi:10.1017/S0963548318000342.
  • [24] Rohan Goyal, Prahladh Harsha, Mrinal Kumar, and Ashutosh Shankar. Fast list-decoding of univariate multiplicity and folded Reed-Solomon codes. arXiv Preprint, 2023. doi:10.48550/arXiv.2311.17841.
  • [25] Zeyu Guo, Mrinal Kumar, Ramprasad Saptharishi, and Noam Solomon. Derandomization from algebraic hardness. SIAM Journal on Computing, 51(2):315–335, 2022. doi:10.1137/20M1347395.
  • [26] Zeyu Guo and Zihan Zhang. Randomly Punctured Reed-Solomon Codes Achieve the List Decoding Capacity over Polynomial-Size Alphabets. arXiv Preprint, 2023. doi:10.48550/arXiv.2304.01403.
  • [27] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon and algebraic-geometric codes. In Proceedings 39th Annual Symposium on Foundations of Computer Science (Cat. No.98CB36280), pages 28–37, 1998. doi:10.1109/SFCS.1998.743426.
  • [28] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–150, 2008. doi:10.1109/TIT.2007.911222.
  • [29] Venkatesan Guruswami and Carol Wang. Linear-algebraic list decoding for variants of reed–solomon codes. IEEE Transactions on Information Theory, 59(6):3257–3268, 2013. doi:10.1109/TIT.2013.2246813.
  • [30] Wolfgang Hahn. Über Orthogonalpolynome, die q-Differenzengleichungen genügen. Mathematische Nachrichten, 2(1-2):4–34, 1949. doi:10.1002/mana.19490020103.
  • [31] F. H. Jackson. A q-form of Taylor’s theorem. Messenger of Mathematics, 38:62–64, 1909.
  • [32] F. H. Jackson. A q-series corresponding to Taylor’s series. Messenger of Mathematics, 39:26–28, 1909.
  • [33] F. H. Jackson. On q-Functions and a certain Difference Operator. Transactions of the Royal Society of Edinburgh, 46(2):253–281, 1909. doi:10.1017/S0080456800002751.
  • [34] F. H. Jackson. On q-Definite Integrals. The Quarterly Journal of Pure and Applied Mathematics, 41:193–203, 1910.
  • [35] F. H. Jackson. q-Difference Equations. American Journal of Mathematics, 32(4):305–314, 1910. doi:10.2307/2370183.
  • [36] Charles Jordan. Calculus of Finite Differences. Chelsea Publishing Company, New York, 2nd edition, 1950.
  • [37] Victor Kac and Pokman Cheung. Quantum Calculus. Springer New York, NY, 2002. doi:10.1007/978-1-4613-0071-7.
  • [38] R. Kannan. Solving systems of linear equations over polynomials. Theoretical Computer Science, 39:69–88, 1985. Third Conference on Foundations of Software Technology and Theoretical Computer Science. doi:10.1016/0304-3975(85)90131-8.
  • [39] T. Kasami, Shu Lin, and W. Peterson. New generalizations of the Reed-Muller codes–I: Primitive codes. IEEE Transactions on Information Theory, 14(2):189–199, 1968. doi:10.1109/TIT.1968.1054127.
  • [40] T. Kasami, Shu Lin, and W. Peterson. Polynomial codes. IEEE Transactions on Information Theory, 14(6):807–814, 1968. doi:10.1109/TIT.1968.1054226.
  • [41] John Y. Kim and Swastik Kopparty. Decoding Reed-Muller Codes over Product Sets. Theory of Computing, 13(21):1–38, 2017. doi:10.4086/toc.2017.v013a021.
  • [42] Wolfram Koepf, Predrag M. Rajković, and Sladjana D. Marinković. Properties of q-holonomic functions. Journal of Difference Equations and Applications, 13(7):621–638, 2007. doi:10.1080/10236190701264925.
  • [43] Swastik Kopparty. List-Decoding Multiplicity Codes. Theory of Computing, 11(5):149–182, 2015. doi:10.4086/toc.2015.v011a005.
  • [44] Swastik Kopparty, Noga Ron-Zewi, Shubhangi Saraf, and Mary Wootters. Improved List Decoding of Folded Reed-Solomon and Multiplicity Codes. SIAM Journal on Computing, 52(3):794–840, 2023. doi:10.1137/20M1370215.
  • [45] Swastik Kopparty, Shubhangi Saraf, and Sergey Yekhanin. High-Rate Codes with Sublinear-Time Decoding. J. ACM, 61(5), 2014. doi:10.1145/2629416.
  • [46] S.F. Lacroix. Traité des Différences et des Séries. Vol. 3 of Traité du Calcul Differentiel et du Calcul Intégral. Courcier, Paris, 1819.
  • [47] D. Larsson and S.D. Silvestrov. Burchnall-Chaundy Theory for q-Difference Operators and q-Deformed Heisenberg Algebras. Journal of Nonlinear Mathematical Physics, 10(sup2):95–106, 2003. doi:10.2991/jnmp.2003.10.s2.7.
  • [48] Rudolf Lidl and Harald Niederreiter. Finite Fields. Cambridge University Press, 2nd edition, 1997.
  • [49] J. Massey. Shift-register synthesis and BCH decoding. IEEE Transactions on Information Theory, 15(1):122–127, 1969. doi:10.1109/TIT.1969.1054260.
  • [50] Tamás Mészáros. S-extremal set systems and Gröbner bases (Diploma Thesis). PhD thesis, Budapest University of Technology and Economics, 2005. Available at https://sites.google.com/view/tamas-meszaros/.
  • [51] Shay Moran and Cyrus Rashtchian. Shattered Sets and the Hilbert Function. In Piotr Faliszewski, Anca Muscholl, and Rolf Niedermeier, editors, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016), volume 58 of Leibniz International Proceedings in Informatics (LIPIcs), pages 70:1–70:14, Dagstuhl, Germany, 2016. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.MFCS.2016.70.
  • [52] D.E. Muller. Application of Boolean algebra to switching circuit design and to error detection. Transactions of the I.R.E. Professional Group on Electronic Computers, EC-3(3):6–12, 1954. doi:10.1109/IREPGELC.1954.6499441.
  • [53] Rasmus Refslund Nielsen. List decoding of linear block codes. Ph.d. thesis, Technical University of Denmark, 2001. Available at https://orbit.dtu.dk/files/3028444/List%20decoding%20of%20linear%20block%20codes.pdf.
  • [54] Niels Erik Nörlund. Vorlesungen über Differenzenrechnung. Springer Berlin, 1924.
  • [55] R. Pellikaan and Xin-Wen Wu. List decoding of q-ary Reed-Muller codes. IEEE Transactions on Information Theory, 50(4):679–682, 2004. doi:10.1109/TIT.2004.825043.
  • [56] W. Peterson. Encoding and error-correction procedures for the Bose-Chaudhuri codes. IRE Transactions on Information Theory, 6(4):459–470, 1960. doi:10.1109/TIT.1960.1057586.
  • [57] I. Reed. A class of multiple-error-correcting codes and the decoding scheme. Transactions of the IRE Professional Group on Information Theory, 4(4):38–49, 1954. doi:10.1109/TIT.1954.1057465.
  • [58] I. S. Reed and G. Solomon. Polynomial Codes Over Certain Finite Fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–304, 1960. doi:10.1137/0108018.
  • [59] C.H. Richardson. An Introduction to the Calculus of Finite Differences. D. Van Nostrand Company, Inc., New York, 1954.
  • [60] Steven Roman. The Umbral Calculus. Dover Publications, 2005.
  • [61] Noga Ron-Zewi, S. Venkitesh, and Mary Wootters. Efficient List Decoding of Polynomial Ideal Codes with Optimal List Size. arXiv Preprint, 2024. doi:10.48550/arXiv.2401.14517.
  • [62] Eugene Spiegel and Christopher O’Donnell. Incidence Algebras. CRC Press, 1997.
  • [63] Shashank Srivastava. Improved List Size for Folded Reed-Solomon Codes. arXiv Preprint, 2024. doi:10.48550/arXiv.2410.09031.
  • [64] J.F. Steffensen. Interpolation. The Williams & Wilkins Company, Baltimore, 1927.
  • [65] Madhu Sudan. Decoding of Reed Solomon Codes beyond the Error-Correction Bound. Journal of Complexity, 13(1):180–193, 1997. doi:10.1006/jcom.1997.0439.
  • [66] Itzhak Tamo. Tighter list-size bounds for list-decoding and recovery of folded Reed-Solomon and multiplicity codes. IEEE Transactions on Information Theory (Early Access), pages 1–1, 2024. doi:10.1109/TIT.2024.3402171.
  • [67] L. Welch. Error Correction for Algebraic Block Codes. IEEE Int. Symp. Inform. Theory, 1983. URL: https://cir.nii.ac.jp/crid/1573668924281711104.
  • [68] E. Weldon. New generalizations of the Reed-Muller codes–II: Nonprimitive codes. IEEE Transactions on Information Theory, 14(2):199–205, 1968. doi:10.1109/TIT.1968.1054128.
  • [69] J.M. Wozencraft. List Decoding. Quarterly Progress Report, Research Lab of Electronics, MIT, 48:90–95, 1958.