
List-Recovery of Random Linear Codes over Small Fields

Dean Doron, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Jonathan Mosheiff, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Nicolas Resch, Informatics Institute, University of Amsterdam, The Netherlands; João Ribeiro, Instituto de Telecomunicações and Departamento de Matemática, Instituto Superior Técnico, Universidade de Lisboa, Portugal
Abstract

We study list-recoverability of random linear codes over small fields, both from errors and from erasures. We consider codes of rate ε-close to capacity, and aim to bound the dependence of the output list size L on ε, the input list size ℓ, and the alphabet size q. Prior to our work, the best upper bound was L = q^{O(ℓ/ε)} (Zyablov and Pinsker, Prob. Per. Inf. 1981).

Previous work has identified cases in which linear codes provably perform worse than non-linear codes with respect to list-recovery. While there exist non-linear codes that achieve L = O(ℓ/ε), we know that L ≥ ℓ^{Ω(1/ε)} is necessary for list recovery from erasures over fields of small characteristic, and for list recovery from errors over large alphabets.

We show that in other relevant regimes there is no significant price to pay for linearity, in the sense that we get the correct dependence on the gap-to-capacity ε and go beyond the Zyablov–Pinsker bound for the first time. Specifically, when q is constant and ε approaches zero,

  • For list-recovery from erasures over prime fields, we show that L ≤ C_1/ε. By prior work, such a result cannot be obtained for low-characteristic fields.

  • For list-recovery from errors over arbitrary fields, we prove that L ≤ C_2/ε.

Above, C_1 and C_2 depend on the decoding radius, input list size, and field size. We provide concrete bounds on these constants, and the upper bounds on L improve upon the Zyablov–Pinsker bound whenever q ≤ 2^{(1/ε)^c} for some small universal constant c > 0.

Keywords and phrases:
List recovery, random linear codes
Category:
RANDOM
Funding:
Dean Doron: Supported in part by NSF-BSF grant #2022644.
Jonathan Mosheiff: Supported by Israel Science Foundation grant 3450/24 and an Alon Fellowship.
Nicolas Resch: Supported by an NWO (Dutch Research Council) grant with number C.2324.0590.
João Ribeiro: Supported by FCT/MECI through national funds and when applicable co-funded EU funds under UID/50008: Instituto de Telecomunicações.
Copyright and License:
© Dean Doron, Jonathan Mosheiff, Nicolas Resch, and João Ribeiro; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Error-correcting codes
Related Version:
Full Version: https://www.arxiv.org/abs/2505.05935 [3]
Funding:
This work was done in part while the authors were visiting the Simons Institute for the Theory of Computing, supported by DOE grant #DE-SC0024124.
Editors:
Alina Ene and Eshan Chattopadhyay

1 Introduction

Error-correcting codes enable reliable communication over noisy channels by encoding messages m ∈ Σ^k as codewords c ∈ Σ^n. A code 𝒞 ⊆ Σ^n of rate R = k/n and minimum (relative) Hamming distance δ allows for reliable communication over an adversarial noisy channel that corrupts up to a δ/2-fraction of codeword symbols. To tolerate more corruptions, one can relax unique decoding to list decoding, where the decoder outputs all codewords within a given Hamming radius.

This notion is further generalized by list recovery (from errors), which models scenarios where the receiver gets a small list of possible values for each symbol. Formally, a code 𝒞 is said to be (ρ,ℓ,L)-list-recoverable if for every sequence of sets T_1,…,T_n ⊆ Σ with |T_i| ≤ ℓ, we have

|𝒞 ∩ B_ρ(T_1 × ⋯ × T_n)| ≤ L,

where B_ρ(T_1 × ⋯ × T_n) denotes the Hamming ball consisting of all words in Σ^n that agree with T_1,…,T_n in at least (1−ρ)n coordinates. When ℓ = 1, list-recovery from errors reduces to standard list-decoding (from errors).

A related variant is list recovery from erasures, where some coordinates are entirely unknown, modeled by setting T_i = Σ. A code is said to be (α,ℓ,L)-list-recoverable from erasures if

|𝒞 ∩ (T_1 × ⋯ × T_n)| ≤ L

whenever |T_i| ≤ ℓ for at least (1−α)n positions. Here too, the case ℓ = 1 corresponds to list-decoding from erasures.

List recoverable codes are used as a building block for list-decodable and uniquely decodable codes [13, 14, 15, 16, 26, 9, 21]. They have also gained a significant independent interest, in part due to their applications in pseudorandomness [38, 19, 4, 24], algorithms (in particular, for heavy hitters, compressed sensing, and combinatorial group testing [23, 34, 28, 7, 5]), and cryptography [20, 22].

For both list-recovery from errors and from erasures, there exists a well-defined capacity threshold that characterizes the maximal achievable rate for which bounded list-size decoding is possible. Specifically, given parameters ρ, ℓ, and alphabet size q, there is a critical rate R* = R*(ρ,ℓ,q) such that:

  • For every ε > 0 and any large enough block length, there exist codes of rate R* − ε that are (ρ,ℓ,L)-list-recoverable (from errors or erasures) with

    L = O_{ℓ,ε}(1). (1)
  • For every ε > 0 and any large enough block length n, no code of rate R* + ε is (ρ,ℓ,L)-list-recoverable for L = q^{o(n)}.

The exact threshold depends on the recovery model:

  • For (ρ,ℓ,L)-list-recoverability from errors, the threshold rate is

    R*_errors = 1 − h_{q,ℓ}(ρ),

    where

    h_{q,ℓ}(ρ) = ρ·log_q((q−ℓ)/ρ) + (1−ρ)·log_q(ℓ/(1−ρ)),

    valid for 0 ≤ ρ ≤ 1 − ℓ/q [35, Theorem 2.4.12].

  • For (α,ℓ,L)-list-recoverability from erasures, the corresponding threshold is

    R*_erasures = (1−α)·(1 − log_q ℓ).
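These thresholds are straightforward to evaluate numerically. The sketch below, with helper names of our choosing, implements h_{q,ℓ}(ρ) = ρ·log_q((q−ℓ)/ρ) + (1−ρ)·log_q(ℓ/(1−ρ)) and both capacity expressions:

```python
import math

def h_ql(x: float, q: int, l: int) -> float:
    """(q, l)-entropy: x*log_q((q-l)/x) + (1-x)*log_q(l/(1-x))."""
    if x == 0:
        return math.log(l, q)  # entropy of a uniform element of a size-l set
    return x * math.log((q - l) / x, q) + (1 - x) * math.log(l / (1 - x), q)

def rate_errors(rho: float, q: int, l: int) -> float:
    """Threshold rate for (rho, l, L)-list-recovery from errors."""
    return 1 - h_ql(rho, q, l)

def rate_erasures(alpha: float, q: int, l: int) -> float:
    """Threshold rate for (alpha, l, L)-list-recovery from erasures."""
    return (1 - alpha) * (1 - math.log(l, q))
```

As a sanity check, h_{q,ℓ} evaluates to exactly 1 at ρ = 1 − ℓ/q, so the errors threshold vanishes at the edge of the valid range.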

The dependence of the list size L on the parameters ℓ and ε (see Equation 1) is often critical, and has been the focus of extensive research (e.g., [36, 37, 32, 10, 17, 8, 39, 2, 31]).

Using the probabilistic method, it is easy to show that plain random codes achieve L = O(ℓ/ε), and this dependence is often viewed as the optimal benchmark. Codes achieving this tradeoff are said to match the Elias bound for list-recovery, in reference to the analogous threshold in list-decoding [6] (see also [33]).

To set expectations for list-recovery of linear codes, we recall the classic argument of Zyablov and Pinsker [40], adapted to the setting of list-recovery by Guruswami [11]. Since any set of L+1 vectors has a linearly independent subset of size at least log_q(L+1), and since the events that linearly independent vectors lie in a random linear code are stochastically independent, the argument for plain random codes gives L = q^{O(ℓ/ε)}. This naturally raises the question: what is the actual price of linearity in list-recovery? While various forms of degradation are possible, we seek to understand how the requirement of linearity affects the list-size achievable near capacity.
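The counting fact underlying the Zyablov–Pinsker argument — a set S of distinct vectors lies inside its own span, so its rank is at least log_q |S| — can be sanity-checked by brute force over a small prime field. A minimal sketch (the helpers fp_rank and spans_large_subset are ours):

```python
import math
from itertools import product

def fp_rank(vectors, p):
    """Rank over the prime field F_p, via textbook Gaussian elimination."""
    rows = [list(v) for v in vectors]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # Fermat inverse modulo the prime p
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(a - c * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def spans_large_subset(p, n):
    """Any set S of distinct vectors has rank >= log_p |S| (here S = F_p^n)."""
    S = list(product(range(p), repeat=n))
    return fp_rank(S, p) >= math.log(len(S), p) - 1e-9
```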

Prior work

Most previous results concerning the output list size have focused on the large alphabet regime, where q is at least exponential in 1/ε. In this setting, Li and Shagrithaya [31] recently proved that random linear codes are almost surely list-recoverable with list size L ≤ (ℓ/ε)^{O(ℓ/ε)}. By a reduction between code ensembles [30], the same upper bound also holds with high probability for Reed–Solomon codes over random evaluation sets. On the other hand, [31] proved that all (ρ,ℓ,L)-list-recoverable (from errors) linear codes over a large alphabet must satisfy L ≥ ℓ^{Ω(R/ε)}, implying an exponential gap in the list-size between linear codes and plain random codes. A similar negative result was previously proven by Chen and Zhang [2] for Reed–Solomon codes and folded Reed–Solomon codes. Recently, Komech and Mosheiff [25] constructed a new ensemble of non-linear codes that achieve L ≤ O(ℓ/ε) in list-recovery from errors. These are the only codes other than plain random codes known to achieve the list-recovery Elias bound.

Older works of Rudra and Wootters [36, 37], incomparable to [31], show (in our terms) that random linear codes achieve list-size L ≤ (1/ε)·log²(ℓ/ε) for list-recovery (from errors) in the large alphabet regime, but only under the guarantee that ρ = 1 − Ω(ε).

Other works concern the list-recoverability of folded Reed–Solomon codes, multiplicity codes, tensor codes, and variants thereof (e.g., [27, 21, 39]). In particular, Tamo [39] shows that folded Reed–Solomon codes and multiplicity codes achieve list size L ≤ (ℓ/ε)^{O((1+log ℓ)/ε)} (in the errors case) in the large alphabet regime.

In contrast, very little is known about list-recovery in the small alphabet regime, where q is sub-exponential in 1/ε. To the best of our knowledge, the only positive result in this setting is the aforementioned L = q^{O(ℓ/ε)} for random linear codes [40]. On the other hand, we know that when q is a power of a small prime, random linear codes in this regime are very unlikely to be (α,ℓ,L)-list-recoverable from erasures with L ≤ ℓ^{o(1/ε)} [17] (we state this as Theorem 1 below).

Provable separations between linear and nonlinear codes were also established in the setting of list decoding from erasures. For the special case of 𝔽_2, and aiming for an erasure decoding radius of α = 1 − ε, Guruswami [11] showed that the output list size must satisfy L = Ω(1/ε), as long as the rate is sufficiently non-vanishing. In contrast, plain random codes achieve L = O(log(1/ε)), and we even have explicit constructions that nearly match plain random codes in some parameter regimes [1].

1.1 Our Contribution

We establish new upper bounds on the output list size for list-recovery of random linear codes near capacity in the small alphabet regime. To the best of our knowledge, these are the only known bounds for general list-recovery in this setting beyond the classical Zyablov–Pinsker argument. Notably, our results achieve list sizes with only linear dependence on 1/ε, in contrast to the exponential dependence in previous bounds.

List-recovery from erasures over prime fields

For our first result, we consider (α,ℓ,L)-list-recovery from erasures. Recall that the capacity in this case is

R*_erasures(α,ℓ,q) = (1−α)·(1 − log_q ℓ).

As mentioned before, [17] showed that there is a price to pay for linearity over fields of small characteristic. More precisely, they proved the following.

Theorem 1 (informal; see [17, Theorem III.1]).

If ℓ is a power of char(𝔽_q), then with high probability over the choice of the linear code, the output list size L cannot be taken smaller than ℓ^{Ω(1/ε)}.

We show that this limitation disappears when considering a prime field size q. In this case, we can make the output list size O_{α,ℓ,q}(1/ε).

Theorem 2 (list recovery from erasures over prime fields; see Theorem 25).

Given 1 ≤ ℓ ≤ q with q prime and α ∈ [0,1), there exists C(α,ℓ,q) > 0 such that the following holds. Let 𝒞 ⊆ 𝔽_q^n be a random linear code of rate R*_erasures(α,ℓ,q) − ε for some ε > 0. Then, 𝒞 is with high probability (α, ℓ, C(α,ℓ,q)/ε)-list-recoverable from erasures.

While our focus is on the setting where α, ℓ and q are constants, we determine effective bounds on C(α,ℓ,q) even when α is a function of n, and q is slightly super-constant. (For a concrete example, when ℓ ≤ (1−γ)q for some constant γ, and α is bounded away from 1, we can take C(α,ℓ,q) ≤ q^{O(ℓ log q)} as long as n ≥ C(α,ℓ,q).) We refer the reader to Theorem 25 for the precise bound.

List-recovery from errors

Next, we consider the case of list-recovery from errors, where we recall the capacity is

R*_errors(ρ,ℓ,q) = 1 − h_{q,ℓ}(ρ).

Here, contrary to what happens in the regime of large q-s, we do not observe any price to pay for linearity, at least in terms of the dependence on the gap-to-capacity ε.

Theorem 3 (list recovery from errors; see Theorem 31).

Given 1 ≤ ℓ ≤ q with q a prime power and ρ ∈ (0, 1 − ℓ/q), there exists C(ρ,ℓ,q) > 0 such that the following holds. Let 𝒞 ⊆ 𝔽_q^n be a random linear code of rate R*_errors(ρ,ℓ,q) − ε for some ε > 0. Then, 𝒞 is with high probability (ρ, ℓ, C(ρ,ℓ,q)/ε)-list-recoverable from errors.

Again, we determine effective bounds on the numerator C(ρ,ℓ,q), which can be found in Theorem 31. (Specifically, when ρ is bounded away from both 0 and 1 − ℓ/q, we can take C(ρ,ℓ,q) ≤ q^{ℓ·polylog(q)}. See Theorem 31 for the precise bound.) For context, we recall that in the “large q regime” (say, q ≥ ℓ^{1/ε}), such a result is provably impossible. Specifically, when q is large, the list-recovery capacity is (essentially) the Singleton bound, namely, R*_errors ≈ 1 − ρ. It is known that at rate 1 − ρ − ε, any linear code list-recoverable from errors must have L ≥ ℓ^{Ω(R/ε)} [31]. That is, the dependence of L on the gap-to-capacity must be exponential. But in our case, at least if ρ, ℓ and q are held constant, the dependence of L on ε is just O(1/ε).

We conclude this section with some remarks.

 Remark 4 (on the field insensitivity).

Both of our results show that there is no significant price to pay for linearity in list-recovery for certain parameter regimes. Our result in Theorem 3 on list-recovery from errors is insensitive to the field size. On the other hand, as discussed above, our result in Theorem 2 on list-recovery from erasures is (necessarily!) field sensitive. We provide some informal discussion on why this happens. First, note that we work with rates close to capacity, and the capacity in the erasures setting is larger for comparable α and ρ. Second, looking ahead (see Section 1.2 for more details), list-recovery from erasures depends on the “additive structure” of T_1 × ⋯ × T_n, for arbitrary size-ℓ subsets T_1,…,T_n ⊆ 𝔽_q. If the T_i’s are subspaces of 𝔽_q, then it is quite likely these form a bad configuration for list-recovery from erasures. In contrast, list-recovery from errors depends on the additive structure of the “puffed-up” combinatorial rectangles

B_ρ(T_1 × T_2 × ⋯ × T_n) = {x ∈ 𝔽_q^n : x_i ∉ T_i for at most ρn choices of i ∈ [n]}.

Even if the T_i’s are subspaces of 𝔽_q, the “puffing up” operation kills any additive structure that could lead to a bad list-recovery configuration.

 Remark 5 (on the dependence of L on the various parameters).

Recall that a code which is ε-close to capacity is said to achieve the Elias bound if L = O(ℓ/ε). Note that in both Theorem 2 and Theorem 3, the dependence of L on ε is as we would hope. However, the dependence on the other parameters (particularly ℓ and q) is exponentially worse than the O(ℓ) dependence. We leave it as a natural open problem to improve on this dependency.

 Remark 6 (comparison to [40]).

As mentioned earlier, we are not aware of any prior arguments establishing non-trivial bounds on the list-size L for codes ε-close to capacity which do not require q to be large, other than what follows from the Zyablov–Pinsker argument [40] (given formally in [11]). Recall that this method guarantees list size L = q^{O(ℓ/ε)}. In comparison, we obtain (roughly) L = (1/ε)·q^{O(ℓ log q)} in the case of erasures (over prime fields), and L = (1/ε)·q^{ℓ·polylog(q)} in the case of errors (over all fields, assuming ρ is not too close to 0 or 1 − ℓ/q). Thus, compared to [40], we obtain a smaller bound on L once, roughly, ε < 1/polylog(q). In particular, when ℓ and q are constants we achieve an asymptotic improvement in L.

1.2 Technical Overview

At a conceptual level, our work reconsiders the approach of Guruswami, Håstad, and Kopparty [12], which allowed for an understanding of the list-decodability from errors of random linear codes over constant-sized alphabets, and adapts it to the case of list-recovery (either from erasures or errors). Recall that list-decoding from errors is the special case of list-recovery from errors with ℓ = 1.

Using terminology that we define in this work, the first step in the argument of [12] is to argue that Hamming balls are nontrivially mixing. Specifically, fix any center z ∈ 𝔽_q^n and consider sampling twice uniformly and independently from the Hamming ball of radius ρ centered at z; denote the two samples by X and X′. The authors argue that for any α, β ∈ 𝔽_q∖{0} and any other center y ∈ 𝔽_q^n, it holds that Pr[αX + βX′ ∈ B_ρ(y)] ≤ q^{−δn} for some δ = δ(ρ,q) > 0. (In fact, for [12] it sufficed to only consider the z = 0 case, in which case one can always assume α = β = 1. But their argument naturally generalizes to this case, and this is the notion of mixing that we require for our list-recovery results.) Alternatively, we could say that for any y ∈ 𝔽_q^n, we have Pr[αX + βX′ ∈ y + B_ρ(z)] ≤ q^{−δn}. From here, using some additional tools like 2-increasing chains – whose existence they establish via a Ramsey-theoretic argument – they are able to show that random linear codes with ε gap-to-capacity are with high probability (ρ, C(ρ,q)/ε)-list-decodable from errors. In particular, their techniques promise the correct dependence on ε (although the dependence on the parameters q and ρ is quite poor).

We recommend viewing this “δ-mixing” property in the following light. Any argument establishing that random linear codes have good list-decodability must somehow argue that random subspaces and Hamming balls don’t tend to “correlate” too much. In particular, it should not be the case that Hamming balls have noticeable “linear structure”, and in particular they should be “far” from being closed under addition.

In our work, we consider whether or not sets that are relevant for list-recovery, i.e., the sorts of sets that list-recoverable codes cannot intersect with too much, also have nontrivial mixing. Firstly, we crystallize in a definition what it means for an arbitrary set T ⊆ 𝔽_q^n to be δ-mixing, where δ > 0: for any α, β ∈ 𝔽_q∖{0} and y ∈ 𝔽_q^n, if X, X′ ∼ T – which denotes that X and X′ are sampled independently and uniformly from T – then Pr[αX + βX′ ∈ y + T] ≤ q^{−δn}.

Now, for (α,ℓ,L)-list-recovery from erasures the relevant sets are combinatorial rectangles T_1 × T_2 × ⋯ × T_n where for at least (1−α)n values of i ∈ [n] we have |T_i| ≤ ℓ. For (ρ,ℓ,L)-list-recovery from errors the sets are “puffed-up” combinatorial rectangles. Namely, for T_1, T_2, …, T_n ⊆ 𝔽_q, each with |T_i| ≤ ℓ, we consider list-recovery balls

B_ρ(T_1 × T_2 × ⋯ × T_n) = {x ∈ 𝔽_q^n : x_i ∉ T_i for at most ρn choices of i ∈ [n]}.

Following the argument of [12], once we establish that these sets are nontrivially mixing, we can obtain bounds on the list-size with the correct dependence on ε. Our task then boils down to understanding the mixing properties of the sets relevant for list-recovery. We consider first the erasures case, and subsequently discuss the errors case.

List-recovery from erasures

Firstly, observe that for a combinatorial rectangle T_1 × ⋯ × T_n, if each of the sets T_i is nontrivially mixing as a subset of 𝔽_q (take the n = 1 case of the above definition), then T_1 × ⋯ × T_n is also nontrivially mixing (as a subset of 𝔽_q^n). Hence, it suffices for us to consider whether or not subsets of 𝔽_q mix. It is here that the dependence on the field size shows up.

Recall from the earlier discussion that [17] established that random linear codes over 𝔽_{ℓ^t} (where ℓ is a prime power) are with high probability not (α,ℓ,L)-list-recoverable from erasures unless L ≥ ℓ^{Ω(1/ε)}. Indeed, it is easy to find a subset T ⊆ 𝔽_{ℓ^t} which is not δ-mixing for any δ > 0: take T = 𝔽_ℓ (or, more generally, any multiplicative coset γ𝔽_ℓ for γ ∈ 𝔽_{ℓ^t}∖{0}). Since 𝔽_ℓ is closed under addition, in particular we have that if X and X′ are sampled independently and uniformly from T then Pr[X + X′ ∈ T] = 1, so T is not δ-mixing for any δ > 0.
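For a concrete toy instance of this failure, take ℓ = 2 and t = 2: 𝔽_4 can be represented on {0, 1, 2, 3} with addition given by bitwise XOR (as in any binary field), and T = {0, 1} is then the prime subfield 𝔽_2. A brute-force check (this representation is our own choice) confirms that the subfield never mixes:

```python
from itertools import product

# F_4 represented on {0, 1, 2, 3}: in a field of characteristic 2,
# addition is bitwise XOR, and T = {0, 1} is the prime subfield F_2.
T = {0, 1}
hits = sum((x ^ xp) in T for x, xp in product(T, repeat=2))
prob_in_T = hits / len(T) ** 2  # Pr[X + X' in T] for X, X' uniform on T
```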

What went wrong in this example? The fact that 𝔽_{ℓ^t} contains 𝔽_ℓ as a subfield means that 𝔽_{ℓ^t} contains non-trivial 𝔽_ℓ-linear subspaces. Such subspaces naturally create “bad” input lists, and the argument of [17] establishes that indeed a random linear code is likely to contain many vectors from a combinatorial rectangle T_1 × ⋯ × T_n where at least a (1−α) fraction of the T_i’s are 1-dimensional 𝔽_ℓ-subspaces of 𝔽_{ℓ^t}.

If we insist that q be a prime, then 𝔽_q does not have any non-trivial subspaces. However, in this case 𝔽_q still contains some subsets with “additive structure;” for example, taking ℓ to be odd here for simplicity, centered intervals (or, more generally, arithmetic progressions) like I = {−(ℓ−1)/2, −(ℓ−1)/2 + 1, …, (ℓ−1)/2 − 1, (ℓ−1)/2} ⊆ 𝔽_q have the property that if one samples X, X′ ∼ I independently, then Pr[X + X′ ∈ I] = 3/4 + 1/(4ℓ²), assuming ℓ ≤ 2q/3. But note that this is still non-trivially bounded away from 1! Remarkably, an argument of Lev [29] shows that this is the worst case over prime fields. More precisely, over all sets T of size ℓ, to maximize Pr[αX + βX′ ∈ γ + T] where α, β, γ ∈ 𝔽_q with α, β ≠ 0, one should choose T = I (the centered interval of length ℓ), α = β = 1 and γ = 0. In our technical section we give an effective bound on Pr[X + X′ ∈ I] for all ℓ < q, which allows us to argue that combinatorial rectangles T_1 × ⋯ × T_n in which at least a 1−α fraction of the T_i’s have size at most ℓ are nontrivially mixing, as desired.
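The value 3/4 + 1/(4ℓ²) for a centered interval I of odd size ℓ ≤ 2q/3 can be reproduced by brute force; a small sketch (the helper name is ours):

```python
from itertools import product

def interval_self_hit_prob(q: int, l: int) -> float:
    """Pr[X + X' in I] for X, X' independent and uniform on the centered
    interval I = {-(l-1)/2, ..., (l-1)/2} of odd size l inside F_q."""
    m = (l - 1) // 2
    I = {x % q for x in range(-m, m + 1)}
    hits = sum((x + xp) % q in I for x, xp in product(I, repeat=2))
    return hits / l ** 2
```

The sum X + X′ has a triangular distribution centered at 0, and the central ℓ values carry total mass (3ℓ² + 1)/(4ℓ²), which is where the 3/4 + 1/(4ℓ²) figure comes from.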

List-recovery from errors

We now wish to establish that list-recovery balls are nontrivially mixing. Notably, unlike in the case of erasures, the argument here is insensitive to the base field. Let T = T_1 × ⋯ × T_n, where each |T_i| ≤ ℓ. Let X, X′ ∼ B_ρ(T); in this overview, we will sketch how one bounds Pr[X + X′ ∈ B_ρ(T)] (the argument easily generalizes to allow for multipliers α, β ∈ 𝔽_q∖{0} and a shift y ∈ 𝔽_q^n).

Unlike in the case of combinatorial rectangles, it is not the case that X = (X_1,…,X_n) and X′ = (X′_1,…,X′_n) have independent coordinates. For example, conditioned on X_1 lying in T_1, X_2 is less likely to lie in T_2. However, these correlations are relatively minor, and we can essentially “pretend” that both X and X′ are sampled as follows: for each i ∈ [n], with probability 1−ρ set the i-th coordinate to a uniformly random element of T_i, and otherwise set it to a uniformly random element of 𝔽_q∖T_i, and these choices are made independently for each i ∈ [n]. (In fact, for technical reasons we have to consider X_i lying in T_i with probability 1−ω for some ω ≤ ρ, and similarly X′_i lying in T_i with probability 1−ω′ for some ω′ ≤ ρ. But by a concentration argument one can easily establish that ω and ω′ are with high probability very close to ρ.) We remark that a similar trick is implicit in [12], and made explicit in the context of rank-metric codes by Guruswami and Resch [18].

This new distribution is much more amenable to analysis. In particular, letting E_i be the indicator for the event X_i + X′_i ∈ T_i, then X + X′ ∈ B_ρ(T) iff Σ_i E_i ≥ (1−ρ)n. Thus, if we can argue that Pr[E_i = 1] < 1−ρ, then a classic Chernoff–Hoeffding bound establishes that Pr[Σ_i E_i ≥ (1−ρ)n] is exponentially small, implying the desired δ-mixing.

Bounding Pr[E_i = 1] is the most novel part of the analysis, and is done in Lemma 26. Recall that X_i, X′_i ∼ T_i, and let also Y_i, Y′_i ∼ 𝔽_q∖T_i. We have

Pr[E_i = 1] = (1−ρ)²·Pr[X_i + X′_i ∈ T_i] + 2ρ(1−ρ)·Pr[X_i + Y_i ∈ T_i] + ρ²·Pr[Y_i + Y′_i ∈ T_i]
= Σ_{z_i ∈ T_i} ( (1−ρ)²·Pr[X_i + X′_i = z_i] + 2ρ(1−ρ)·Pr[X_i + Y_i = z_i] + ρ²·Pr[Y_i + Y′_i = z_i] ). (2)

As is standard, Pr[X_i + X′_i = z_i] is proportional to (1_{T_i} ∗ 1_{T_i})(z_i), the convolution of the indicator functions for T_i, and similarly Pr[X_i + Y_i = z_i] and Pr[Y_i + Y′_i = z_i] are proportional to (1_{T_i} ∗ 1_{𝔽_q∖T_i})(z_i) and (1_{𝔽_q∖T_i} ∗ 1_{𝔽_q∖T_i})(z_i), respectively. Using the simple identity 1_{𝔽_q∖T_i} = 1 − 1_{T_i}, we can rewrite Equation 2 as

c·(1_{T_i} ∗ 1_{T_i})(z_i) + (other terms),

where c is a constant which we show to be positive assuming ρ ∈ (0, 1 − ℓ/q) (and indeed, if ρ > 1 − ℓ/q then it could be negative). Upon giving the trivial upper bound (1_{T_i} ∗ 1_{T_i})(z_i) ≤ ℓ (which corresponds to the case that T_i does not mix at all, as we must do since we are making no assumptions on the field) and simplifying, we obtain the bound

Pr[E_i = 1] ≤ (1−ρ)² + ρ²·ℓ/(q−ℓ).

To our satisfaction, we have that (1−ρ)² + ρ²·ℓ/(q−ℓ) < 1−ρ iff ρ ∈ (0, 1 − ℓ/q), which is precisely the range of decoding radii at which we can hope for positive-rate list-recovery from errors in the first place! Thus, we have that Pr[Σ_i E_i ≥ (1−ρ)n] is exponentially small, establishing that list-recovery balls nontrivially mix.
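This equivalence — the bound (1−ρ)² + ρ²·ℓ/(q−ℓ) beats the 1−ρ threshold exactly when 0 < ρ < 1 − ℓ/q — follows from a short calculation, and can also be confirmed on a numeric grid; a minimal sketch (helper names are ours):

```python
def ei_bound(rho: float, q: int, l: int) -> float:
    """The upper bound (1-rho)^2 + rho^2 * l/(q-l) on Pr[E_i = 1]."""
    return (1 - rho) ** 2 + rho ** 2 * l / (q - l)

def useful(rho: float, q: int, l: int) -> bool:
    """Is the bound strictly below the 1 - rho threshold?"""
    return ei_bound(rho, q, l) < 1 - rho

def equivalence_holds(q: int, l: int, steps: int = 999) -> bool:
    """Check that `useful` holds exactly on 0 < rho < 1 - l/q, on a grid
    (skipping points too close to the threshold itself)."""
    thr = 1 - l / q
    return all(useful(r, q, l) == (r < thr)
               for r in (i / steps for i in range(1, steps))
               if abs(r - thr) > 1e-9)
```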

1.3 Open Problems

Lastly, we leave here some directions for future research:

  • In our results the dependency on ε is correct, but the dependency on ℓ and q is rather poor. Can we improve this dependency? Or, can we perhaps prove new lower bounds on L in terms of ℓ and q that apply when these parameters are not too big?

  • [17] showed that over 𝔽_q with q = ℓ^t (and hence small characteristic), a random linear code is with high probability not (α,ℓ,L)-list-recoverable from erasures unless L ≥ ℓ^{Ω(1/ε)}. Can we show that this lower bound actually applies to every q-ary linear code?

  • Many arguments for codes being list-recoverable from errors in fact establish the stronger property of average-radius-list-recovery, where now one instead shows that for any input lists T_1,…,T_n ⊆ 𝔽_q of size ℓ, given L+1 codewords c^{(1)},…,c^{(L+1)} one has

    (1/(L+1))·Σ_{j=1}^{L+1} Σ_{i=1}^{n} 1{c^{(j)}_i ∉ T_i} > ρn.

    This in particular implies that there cannot be L+1 codewords lying in a list-recovery ball of radius ρ. We believe our method should be able to establish this (slightly) stronger guarantee for random linear codes.

2 Preliminaries

2.1 Notation

We will often denote random variables and sets by uppercase Roman letters. The distinction will be clear from context. We write [n] = {1,…,n} for any positive integer n. For a vector v ∈ 𝔽_q^n with 𝔽_q the finite field of order q, we write Supp(v) = {i : v_i ≠ 0}. We define the weight of v to be wt(v) = |Supp(v)|, and the Hamming distance between v and u is d(u,v) = |{i ∈ [n] : u_i ≠ v_i}| = |Supp(u−v)|. For a collection of vectors v_1,…,v_d ∈ 𝔽_q^n, we denote by Span(v_1,…,v_d) the subspace of 𝔽_q^n spanned by v_1,…,v_d.

We denote the binary entropy function by h_2(·), and recall that h_2(x) = x·log_2(1/x) + (1−x)·log_2(1/(1−x)). For a positive integer q we shorthand exp_q(x) = q^x. Additionally, by default log is the base-2 logarithm. We write 1_A for the indicator function of a set A, and write 1{E} for the indicator random variable that equals 1 if and only if the event E holds.

2.2 The Random Code Model

For an alphabet size q, a plain random code 𝒞 of block length n and rate R ∈ [0,1] is obtained by including each x ∈ [q]^n in 𝒞 with probability q^{−(1−R)n}, and these choices are made independently for each x. (Note then that such a code has size q^{Rn} in expectation, and by a Chernoff bound it follows that it has rate R ± o(1) with high probability.)

When q is a prime power, a random linear code 𝒞 of block length n and rate R ∈ [0,1] is obtained by sampling a uniformly random matrix H ∈ 𝔽_q^{(1−R)n × n}, and defining

𝒞 = {x ∈ 𝔽_q^n : Hx = 0}.

Note that H is full-rank with probability 1 − O(q^{−Rn}), and therefore 𝒞 has rate R with high probability.

Given any subset S ⊆ [q]^n, if 𝒞 is a plain random code of rate R then Pr[S ⊆ 𝒞] = q^{−(1−R)n·|S|}. When dealing with random linear codes, the probability that a set appears in the code is determined by the span of the set.

Proposition 7.

Let 𝒞 ⊆ 𝔽_q^n be a random linear code of block length n and rate R ∈ [0,1], and let x_1,…,x_b ∈ 𝔽_q^n. Then,

Pr[∀i ∈ [b], x_i ∈ 𝒞] = q^{−(1−R)n·dim(Span(x_1,…,x_b))}.
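Proposition 7 can be verified exhaustively for tiny parameters by enumerating all parity-check matrices; a sketch for q = 2 (the helper names are ours):

```python
from itertools import product

def f2_dim(xs):
    """Dimension of the F_2-span of the given vectors (Gaussian elimination)."""
    basis = []
    for x in xs:
        v = list(x)
        for b in basis:
            lead = next(i for i, bi in enumerate(b) if bi)
            if v[lead]:
                v = [(a + c) % 2 for a, c in zip(v, b)]
        if any(v):
            basis.append(v)
    return len(basis)

def prop7_holds(xs, n, m):
    """Exhaustively check Pr_H[x in C for all x in xs] = q^(-m * dim Span(xs))
    for q = 2, where C = {x : Hx = 0} and H is uniform over F_2^{m x n}."""
    count = 0
    for flat in product(range(2), repeat=m * n):  # enumerate every H
        H = [flat[r * n:(r + 1) * n] for r in range(m)]
        if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
               for row in H for x in xs):
            count += 1
    prob = count / 2 ** (m * n)
    return abs(prob - 2 ** (-m * f2_dim(xs))) < 1e-12
```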

2.3 List-Recovery Notions

This section collects the basic notions of list-recovery we study.

List-recovery from erasures

We begin with the relevant definition.

Definition 8 (list recovery from erasures).

Let 𝒞 ⊆ [q]^n be a q-ary code of block-length n. For an erasure radius α ∈ [0,1) and input list size 1 ≤ ℓ ≤ q, we say that 𝒞 is (α,ℓ,L)-list-recoverable from erasures if for every T_1,…,T_n ⊆ [q] such that |T_i| ≤ ℓ for at least (1−α)n of the i-s and T_i = [q] for the remaining, it holds that

|𝒞 ∩ (T_1 × ⋯ × T_n)| ≤ L.

That is, in any combinatorial rectangle of which at least (1−α)n of its side-lengths are at most ℓ (and the remainder can be as large as q), there are at most L codewords.

We will also consider list-recovery from errors. The concept of a list-recovery ball – which generalizes that of a Hamming ball – will be useful.

Definition 9 (list-recovery ball).

Let ℓ ≤ q and let T_1,…,T_n ⊆ [q]. The list-recovery ball of radius ρ centered at T_1 × ⋯ × T_n is

B_ρ(T_1 × ⋯ × T_n) = {x ∈ [q]^n : d(x, T_1 × ⋯ × T_n) ≤ ρn}.

Above, we have extended the Hamming metric by setting

d(x, T_1 × ⋯ × T_n) = |{i ∈ [n] : x_i ∉ T_i}|.
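The extended metric and the list-recovery ball translate directly into code; a minimal sketch (function names are ours):

```python
def rect_dist(x, rect):
    """Extended Hamming distance: number of coordinates with x_i not in T_i."""
    return sum(xi not in Ti for xi, Ti in zip(x, rect))

def in_lr_ball(x, rect, rho):
    """Membership in the list-recovery ball B_rho(T_1 x ... x T_n):
    at most rho * n coordinates may miss their list."""
    return rect_dist(x, rect) <= rho * len(x)
```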

We now state the relevant capacity theorem for list-recovery from erasures. The proofs of the two implications are standard: the possibility result follows from analyzing the performance of a plain random code, while the impossibility result follows from a counting argument.

Theorem 10 (list-recovery from erasures capacity).

Let 1 ≤ ℓ ≤ q and let α ∈ [0,1). Fix ε > 0. For n large enough, the following hold:

  • There exists a code 𝒞 ⊆ [q]^n of rate 1 − (1−α)·log_q ℓ − α − ε which is (α, ℓ, ℓ/ε)-list-recoverable from erasures.

  • For any code 𝒞 ⊆ [q]^n of rate 1 − (1−α)·log_q ℓ − α + ε, there exist T_1,…,T_n ⊆ [q] with |T_i| ≤ ℓ for at least (1−α)n values of i ∈ [n] such that |𝒞 ∩ (T_1 × ⋯ × T_n)| ≥ q^{εn − o(n)}.

We therefore say that the capacity for (α,ℓ,L)-list-recovery from erasures is 1 − α − (1−α)·log_q ℓ. We will study what happens for codes of rate 1 − (1−α)·log_q ℓ − α − ε for some ε > 0, and determine the value of their output list-size L. We will be focused on the case where q is held constant, and the gap-to-capacity ε tends to 0.

List-recovery from errors

We now define what it means for a code to be list-recoverable from errors.

Definition 11 (list recovery from errors).

Let 𝒞 ⊆ [q]^n be a q-ary code of block-length n. For a decoding radius ρ ∈ (0, 1 − ℓ/q) and input list size 1 ≤ ℓ ≤ q, we say that 𝒞 is (ρ,ℓ,L)-list-recoverable from errors if for every T_1,…,T_n ⊆ [q] such that |T_i| ≤ ℓ for all i ∈ [n], it holds that

|𝒞 ∩ B_ρ(T_1 × ⋯ × T_n)| ≤ L.

That is, every list-recovery ball of radius ρ with side-lengths ℓ contains at most L codewords.

We will need an estimate on the size of list-recovery balls. It makes use of the (q,ℓ)-entropy function, defined as follows:

h_{q,ℓ}(x) = x·log_q((q−ℓ)/x) + (1−x)·log_q(ℓ/(1−x)). (3)

An operational interpretation of this quantity is as the base-q entropy of a random variable which, with probability 1−x, samples a uniformly random element from a set of size ℓ, and with probability x samples a uniformly random element from the complement. Additionally, so long as 0 < x < 1 − ℓ/q it holds that 0 < h_{q,ℓ}(x) < 1. Note that if ℓ = 1 one recovers the q-ary entropy function, which we denote as h_q (i.e., if ℓ is omitted from the subscript, then it is by default 1).
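This operational interpretation can be confirmed directly: the base-q Shannon entropy of the described two-stage distribution matches the closed form of Equation (3). A sketch (helper names are ours):

```python
import math

def h_ql(x: float, q: int, l: int) -> float:
    """Closed form of Equation (3)."""
    return x * math.log((q - l) / x, q) + (1 - x) * math.log(l / (1 - x), q)

def mixture_entropy(x: float, q: int, l: int) -> float:
    """Base-q Shannon entropy of the two-stage distribution: w.p. 1-x a
    uniform element of a size-l set, w.p. x a uniform element of the
    (q-l)-element complement."""
    probs = [(1 - x) / l] * l + [x / (q - l)] * (q - l)
    return -sum(p * math.log(p, q) for p in probs)
```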

We now state the relevant estimate.

Proposition 12 ([35, Proposition 2.4.11]).

Let 1 ≤ ℓ ≤ q be integers and ρ ∈ (0, 1 − ℓ/q). Let T_1,…,T_n ⊆ [q] with |T_i| = ℓ for all i ∈ [n]. Then,

q^{n·h_{q,ℓ}(ρ)}·2^{−n} ≤ |B_ρ(T_1 × ⋯ × T_n)| ≤ q^{n·h_{q,ℓ}(ρ)}.
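For small n the ball size is an exact binomial sum, so the estimate can be checked directly; a sketch assuming ρn is an integer (helper names are ours):

```python
import math

def ball_size(n, q, l, radius):
    """Exact |B_rho| when every |T_i| = l: choose which j <= radius
    coordinates miss their list (l choices per hit, q-l per miss)."""
    return sum(math.comb(n, j) * l ** (n - j) * (q - l) ** j
               for j in range(radius + 1))

def h_ql(x, q, l):
    if x == 0:
        return math.log(l, q)
    return x * math.log((q - l) / x, q) + (1 - x) * math.log(l / (1 - x), q)

def proposition12_holds(n, q, l, radius):
    rho = radius / n
    size = ball_size(n, q, l, radius)
    upper = q ** (n * h_ql(rho, q, l))
    return upper / 2 ** n <= size <= upper * (1 + 1e-9)
```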

This estimate drives the following capacity theorem.

Theorem 13 (list-recovery from errors capacity).

Let 1 ≤ ℓ ≤ q and let ρ ∈ (0, 1 − ℓ/q). Fix ε > 0. For n large enough, the following hold:

  • There exists a code 𝒞 ⊆ [q]^n of rate 1 − h_{q,ℓ}(ρ) − ε which is (ρ, ℓ, ℓ/ε)-list-recoverable from errors.

  • For any code 𝒞 ⊆ [q]^n of rate 1 − h_{q,ℓ}(ρ) + ε, there exist T_1,…,T_n ⊆ [q] with |T_i| ≤ ℓ for all i ∈ [n] such that |𝒞 ∩ B_ρ(T_1 × ⋯ × T_n)| ≥ q^{εn − o(n)}.

Thus, we will concern ourselves with codes of rate 1 − h_{q,ℓ}(ρ) − ε, and determine the output list size L for (ρ,ℓ,L)-list-recovery from errors. And, as in the list-recovery from erasures case, we will hold ρ, ℓ, q constant and consider the asymptotic behaviour of L as the gap-to-capacity ε → 0.

We will also need the following lower bound on the difference h_{q,ℓ}(ρ) − h_{q,ℓ}(ρ−η). See the full version of the paper [3] for the proof.

Claim 14.

For any integers q > 0 and 1 ≤ ℓ ≤ q, any ρ ∈ (0, 1 − ℓ/q], and any η ∈ [0,ρ], we have

h_{q,ℓ}(ρ) − h_{q,ℓ}(ρ−η) ≥ η·log_q((q−ℓ)(1−ρ)/(ℓρ)) ≥ 0.

2.4 Increasing Chains

The following definition of an increasing chain was first introduced by Guruswami, Håstad, and Kopparty [12].

Definition 15 (c-increasing chain).

A sequence of vectors v_1,…,v_d ∈ 𝔽_q^n is said to be a c-increasing chain of length d if for all j ∈ [d] we have

|Supp(v_j) ∖ (∪_{i=1}^{j−1} Supp(v_i))| ≥ c.
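This condition translates into a small membership checker; a sketch (the helper name is ours):

```python
def is_c_increasing(chain, c):
    """Per Definition 15: vector j must contribute at least c support
    coordinates not covered by the supports of vectors 1..j-1."""
    covered = set()
    for v in chain:
        supp = {i for i, vi in enumerate(v) if vi != 0}
        if len(supp - covered) < c:
            return False
        covered |= supp
    return True
```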

We require the following lemma on the existence of appropriately long increasing chains in an appropriate shift of an arbitrary subset S ⊆ 𝔽_q^n.

Lemma 16 ([12, Lemma 6.3]).

For every prime power q, and all positive integers c, and Lq, the following holds. For every S𝔽q with |S|=L, there is w𝔽q such that S+w has a c-increasing chain of length at least 1clogqL2(11c)logq((q1)).

2.5 Mixing Sets

In our analysis we need to understand the probability that the sum of two independent uniformly random samples X and X′ from a set T ⊆ 𝔽_q lands in a shifted set T + γ, for an arbitrary shift γ ∈ 𝔽_q (and in fact a more general question of that form). We begin with the necessary definitions.

Definition 17 (mixing over 𝔽q).

For a prime power q and δ ≥ 0, we say that T ⊆ 𝔽_q is δ-mixing if for any α, β, γ ∈ 𝔽_q, where α and β are nonzero, it holds that

Pr_{X,X′}[αX + βX′ ∈ T + γ] ≤ q^{−δ},

where X, X′ ∼ T are independent and uniformly distributed over T, and T + γ = {t + γ : t ∈ T}.

Definition 18 (mixing over 𝔽qn).

For n ∈ ℕ, a prime power q, and δ ≥ 0, we say that T ⊆ 𝔽_q^n is δ-mixing if for any nonzero α, β ∈ 𝔽_q and any z ∈ 𝔽_q^n, it holds that

Pr_{X,X′}[αX + βX′ ∈ T + z] ≤ q^{−δn},

where X, X′ ∼ T are independent and uniformly distributed over T.

 Remark 19.

Note that a set mixes nontrivially when δ > 0, and moreover, we will want δ > 0 to not depend on n. However, in the case of list recovery from erasures, where S_i = 𝔽_q for some i-s, the case of δ = 0 will be useful towards bounding the expected mixing of S_1,…,S_n.

The following connection then follows easily.

 Remark 20.

Suppose that T_1, …, T_n ⊆ 𝔽_q, and each T_i is δ_i-mixing. Then, T = T_1 × ⋯ × T_n is δ-mixing, for δ = 𝔼_{i∼[n]}[δ_i].

3 List-Recovery from Erasures over Prime Fields

In this section we establish the list-recoverability of random linear codes over prime fields. To achieve this, we must first understand the mixing properties of worst-case subsets of prime fields. Most of the proofs are deferred to the full version [3].

3.1 Worst-Case Mixing of Subsets of Prime Fields

Towards understanding mixing of subsets of prime fields, we leverage a general result of Lev [29] which characterizes the worst-case sets T when q is prime. Before we introduce it, we set up some relevant notation.

For a set T ⊆ 𝔽_q we let T̃ denote a “centered interval” of size |T|. More precisely, T̃ is the δ-centered interval associated with T if T̃ = [−α, α+δ] ⊆ 𝔽_q with α ∈ [0, (q−1)/2] and δ ∈ {−1, 0, 1} satisfying |T̃| = 2α + 1 + δ = |T|. Note that when |T| is odd there is a unique centered interval T̃ (because δ = 0 necessarily), but when |T| is even there are two centered intervals, corresponding to δ = ±1.
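Concretely, the centered intervals of a given size can be enumerated as follows (a sketch; elements of 𝔽_q are represented as residues 0, …, q−1, and the function name is ours):

```python
def centered_intervals(q, size):
    # All delta-centered intervals [-alpha, alpha + delta] mod q with
    # |interval| = 2*alpha + 1 + delta = size: one for odd size (delta = 0),
    # two for even size (delta = -1 and delta = +1).
    out = []
    for delta in (-1, 0, 1):
        if (size - 1 - delta) % 2 == 0:
            alpha = (size - 1 - delta) // 2
            if 0 <= alpha <= (q - 1) // 2:
                out.append(sorted(x % q for x in range(-alpha, alpha + delta + 1)))
    return out
```

For instance, over 𝔽_7 a set of size 3 has the unique centered interval [−1, 1] = {6, 0, 1}, while a set of size 4 has the two centered intervals [−2, 1] and [−1, 2].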

Lemma 21 ([29, Theorem 1], adapted).

Let q ≥ 3 be prime and A_1, …, A_k ⊆ 𝔽_q be arbitrary sets with Ã_1, …, Ã_k the associated δ_i-centered intervals. Then, if |δ_1 + ⋯ + δ_k| ≤ 1, for any set B ⊆ 𝔽_q and some associated δ-centered interval B̃, we have

Pr_{X_i∼A_i}[X_1 + ⋯ + X_k ∈ B] ≤ Pr_{X̃_i∼Ã_i}[X̃_1 + ⋯ + X̃_k ∈ B̃].

We can use Lemma 21 to prove the following.

Lemma 22.

Fix a prime q ≥ 3. Let T_1, T_2, T_3 ⊆ 𝔽_q be arbitrary sets of size ℓ > 0. Then,

Pr_{X_1∼T_1, X_2∼T_2}[X_1 + X_2 ∈ T_3] ≤ 3/4 + 1/(4ℓ²) + max(0, 3ℓ−2q−1)·(3ℓ−2q+1)/(4ℓ²) if ℓ is odd, and
Pr_{X_1∼T_1, X_2∼T_2}[X_1 + X_2 ∈ T_3] ≤ 3/4 + max(0, 3ℓ−2q)²/(4ℓ²) if ℓ is even,

and this is tight for all ℓ. In particular:

  1. When ℓ ≤ 2q/3 we have

     Pr_{X_1∼T_1, X_2∼T_2}[X_1 + X_2 ∈ T_3] ≤ 3/4 + 1/(4ℓ²) if ℓ is odd, and ≤ 3/4 if ℓ is even.

  2. When 2q/3 < ℓ ≤ q − 1 we have

     Pr_{X_1∼T_1, X_2∼T_2}[X_1 + X_2 ∈ T_3] ≤ (q² − 3ℓ(q−ℓ))/ℓ².

We can then record the following corollary.

Corollary 23.

For a prime q ≥ 3 and any set T ⊆ 𝔽_q of size ℓ ≤ q − 1:

  • If 2 ≤ ℓ ≤ 2q/3, then T is δ-mixing for δ ≥ log_q(16/13).

  • Otherwise, T is δ-mixing for δ ≥ log_q(ℓ²/(q² − 3ℓ(q−ℓ))).
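For small prime q, both the bound of Lemma 22 and its tightness can be verified exhaustively (a sketch; brute force over all triples of ℓ-subsets of 𝔽_q with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import combinations, product

def max_sum_prob(q, ell):
    # max over all T1, T2, T3 of size ell of Pr[X1 + X2 in T3],
    # with X1 ~ Unif(T1) and X2 ~ Unif(T2) independent.
    best = Fraction(0)
    for T1, T2, T3 in product(combinations(range(q), ell), repeat=3):
        T3 = set(T3)
        hits = sum((x + y) % q in T3 for x in T1 for y in T2)
        best = max(best, Fraction(hits, ell * ell))
    return best

def lemma22_bound(q, ell):
    # Right-hand side of Lemma 22 (odd and even cases).
    s = 3 * ell - 2 * q
    if ell % 2 == 1:
        return Fraction(3, 4) + Fraction(1 + max(0, s - 1) * (s + 1), 4 * ell * ell)
    return Fraction(3, 4) + Fraction(max(0, s) ** 2, 4 * ell * ell)
```

Over 𝔽_5 this confirms the worst-case probabilities 3/4, 7/9, and 13/16 for ℓ = 2, 3, 4, with equality attained by (shifted) intervals.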

3.2 List-Recovery from Erasures over Prime Fields via Mixing

In this section we adapt the technique in [12], together with the worst-case mixing result from Corollary 23, to establish list recovery from erasures over prime fields.

Lemma 24.

Let T ⊆ 𝔽_q^n be δ-mixing. For b ∈ ℕ and any a > 0 satisfying n ≥ q^{8a/δ}, the following holds. Let X^{(1)}, …, X^{(b)} be sampled independently and uniformly at random from T. Then, we have that

Pr[|Span(X^{(1)}, …, X^{(b)}) ∩ T| > C·b] ≤ q^{−an},

where C = C(q, δ, a) = q^{8a/δ}. In particular, when T = T_1 × ⋯ × T_n, and each T_i is δ_i-mixing, we get the same result as above for δ = 𝔼_i[δ_i].

Proof.

Let E denote the bad event that we want to bound, namely |Span(X^{(1)}, …, X^{(b)}) ∩ T| > A for A = b·q^{8a/δ}. Note that E implies that there exists some set S ⊆ 𝔽_q^b, |S| = A + 1, such that X_v ≔ Σ_{i∈[b]} v_i X^{(i)} ∈ T for all v ∈ S. (Notice that if the X^{(i)}-s are not linearly independent, this can only decrease the probability that the intersection is large, so we can concentrate on the case that distinct v-s give rise to distinct X_v-s.) Hence, it suffices to bound the probability that such a set S exists.

Fix some S ⊆ 𝔽_q^b of size A + 1. Applying Lemma 16 with c = 2 (and note that we can assume that A + 1 ≤ q^b), we know there exists w ∈ 𝔽_q^b such that S + w has a 2-increasing chain of length d = (1/2)·log_q((A+1)/2) − (1/2)·log_q((q−1)b). That is, we have v^{(1)}, …, v^{(d)} ∈ S such that for all j ∈ [d],

|Supp(v^{(j)} + w) ∖ (∪_{i=1}^{j−1} Supp(v^{(i)} + w))| ≥ 2.

Now, we can bound

Pr[∀v ∈ S, X_v ∈ T] ≤ Pr[∀j ∈ [d], X_{v^{(j)}} ∈ T]
= Pr[∀j ∈ [d], X_{v^{(j)}} + X_w ∈ T + X_w]
= Pr[∀j ∈ [d], X_{v^{(j)}+w} ∈ T + X_w]. (4)

Next, we bound Equation 4 by

∑_{y∈𝔽_q^n} Pr[∀j ∈ [d], X_{v^{(j)}+w} ∈ T + y]. (5)

Towards bounding each term in the sum, observe that the increasing chain property tells us that for each j ∈ [d], we can write X_{v^{(j)}+w} = Y_past^{(j)} + Y_new^{(j)}, where Y_past^{(j)} contains the X^{(k)}-s that participated in {X_{v^{(i)}+w}}_{i<j}, whereas Y_new^{(j)} contains two new X^{(k)}-s. Now,

Pr[X_{v^{(j)}+w} ∈ T + y | ∀i ∈ [j−1], X_{v^{(i)}+w} ∈ T + y] = 𝔼_{z∼Y_past^{(j)}}[𝟏(z) · Pr[Y_new^{(j)} ∈ T + y − z]], (6)

where 𝟏(z) is an indicator for whether the past X^{(k)}-s landed in T + y, and y − z is a fixed string that depends on the fixing of Y_past^{(j)}. Assume for simplicity that Y_new^{(j)} = αX^{(1)} + βX^{(2)}, where α, β ∈ 𝔽_q are nonzero. Then, using the fact that T is δ-mixing, each summand of Equation 5 can now be bounded by

∏_{j∈[d]} Pr[X_{v^{(j)}+w} ∈ T + y | ∀i ∈ [j−1], X_{v^{(i)}+w} ∈ T + y] ≤ q^{−δdn},

and summing over all y-s gives us

Pr[∀v ∈ S, X_v ∈ T] ≤ q^n · q^{−δdn} = q^{−(δd−1)n}.

Union-bounding over all S-s, we get

Pr[E] ≤ (q^b choose A+1) · q^{−(δd−1)n} ≤ q^{b(A+1) − (δd−1)n}. (7)

First, note that we set parameters so that d ≥ (2a+1)/δ. Indeed, we can set d = (1/2)·log_q((A+1)/(2b(q−1))), and it then suffices to have A + 1 ≥ 2b(q−1)·q^{(4a+2)/δ}, which is satisfied by our choice A = b·q^{8a/δ}. Under this choice of A, it also holds that b(A+1) ≤ ((δd−1)/2)·n, since n is large enough. Overall, Equation 7 gives Pr[E] ≤ q^{−((δd−1)/2)·n} ≤ q^{−an}, as desired. The “In particular” part simply follows from Remark 20.
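To illustrate the quantity controlled by Lemma 24, the following sketch samples b random points of a product set T = T_1 × ⋯ × T_n and counts how much of their span falls back into T (illustrative only; q is taken prime so that coordinatewise arithmetic mod q is field arithmetic, and all names are ours):

```python
import random
from itertools import product

def span_cap_T(q, X, T):
    # |Span(X^(1), ..., X^(b)) ∩ T|: enumerate all q^b linear combinations
    # of the sampled vectors and test membership in T coordinatewise.
    n, b = len(T), len(X)
    span = {tuple(sum(c * x[i] for c, x in zip(coeffs, X)) % q for i in range(n))
            for coeffs in product(range(q), repeat=b)}
    return sum(all(v[i] in T[i] for i in range(n)) for v in span)

random.seed(0)
q, n, b, ell = 3, 8, 2, 2
T = [set(random.sample(range(q), ell)) for _ in range(n)]  # T = T_1 x ... x T_n
X = [tuple(random.choice(sorted(Ti)) for Ti in T) for _ in range(b)]  # uniform over T
size = span_cap_T(q, X, T)  # the sampled vectors themselves always lie in the intersection
```

The lemma asserts that, when T is δ-mixing, this intersection size exceeds C·b only with probability q^{−an}.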

We are now ready to give our list recovery result.

Theorem 25 (list recovery with erasures).

For any n ∈ ℕ, a prime q, an integer ℓ ≤ q − 1, and α, ε ∈ (0, 1), the following holds. With probability at least 1 − q^{−n}, a random linear code 𝒞 ⊆ 𝔽_q^n of rate

R = 1 − α − (1−α)·log_q ℓ − ε

is (α, ℓ, L)-list-recoverable from erasures, with

L = C_{q,ℓ,α} · (1/ε),

provided that n ≥ L. In particular, there exists a universal constant C such that:

  • When ℓ ≤ 2q/3, we can take C_{q,ℓ,α} ≤ q^{C·log q·((1−α)ℓ+1)} ≕ C^{(0)}_{q,ℓ,α}, and,

  • When ℓ = (1−γ)q for some γ ∈ (0, 1/3), we can take C_{q,ℓ,α} ≤ (C^{(0)}_{q,ℓ,α})^{(1−γ)²/((1/3)·γ(1−γ))}.

4 List-Recovery from Errors over Arbitrary Fields

We now turn to the case of list-recovery from errors. Unlike in the case of list-recovery from erasures, we make no assumptions on the underlying field. We first show that list-recovery balls are nontrivially mixing. We then sketch how, again using Lemma 24, we can conclude the desired list-recovery result. The proofs are deferred to the full version [3].

4.1 Mixture of List-Recovery Balls

Lemma 26.

Suppose q is a prime power, 1 ≤ ℓ < q is an integer, and ω_1, ω_2 ∈ (0, 1 − ℓ/q). Let A, B, T ⊆ 𝔽_q be of size ℓ, and let X_1 ∼ Unif(A), X_2 ∼ Unif(B), Y_1 ∼ Unif(𝔽_q ∖ A), and Y_2 ∼ Unif(𝔽_q ∖ B), all independent. Then,

(1−ω_1)(1−ω_2)·Pr[X_1+X_2 ∈ T] + ω_1(1−ω_2)·Pr[X_1+Y_2 ∈ T] + ω_2(1−ω_1)·Pr[Y_1+X_2 ∈ T] + ω_1ω_2·Pr[Y_1+Y_2 ∈ T] (8)
≤ (1−ω_1)(1−ω_2) + ω_1ω_2·(ℓ/(q−ℓ)). (9)
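Lemma 26 can be checked exhaustively for small parameters. The sketch below verifies the inequality over all triples A, B, T of size ℓ in 𝔽_5 and a grid of admissible ω-s (exact rational arithmetic; q prime so that mod-q sums are field sums, and all names are ours):

```python
from fractions import Fraction
from itertools import combinations, product

def hit_prob(q, S1, S2, T):
    # Pr[Z1 + Z2 in T] for independent Z1 ~ Unif(S1), Z2 ~ Unif(S2).
    return Fraction(sum((x + y) % q in T for x in S1 for y in S2),
                    len(S1) * len(S2))

q, ell = 5, 2
univ = set(range(q))
omegas = (Fraction(1, 10), Fraction(3, 10), Fraction(1, 2))  # inside (0, 1 - ell/q)
ok = True
for A, B, T in product(combinations(range(q), ell), repeat=3):
    A, B, T = set(A), set(B), set(T)
    pXX, pXY = hit_prob(q, A, B, T), hit_prob(q, A, univ - B, T)
    pYX, pYY = hit_prob(q, univ - A, B, T), hit_prob(q, univ - A, univ - B, T)
    for w1, w2 in product(omegas, repeat=2):
        lhs = ((1 - w1) * (1 - w2) * pXX + w1 * (1 - w2) * pXY
               + w2 * (1 - w1) * pYX + w1 * w2 * pYY)
        ok = ok and lhs <= (1 - w1) * (1 - w2) + w1 * w2 * Fraction(ell, q - ell)
```

The left-hand side is exactly Pr[Z_1 + Z_2 ∈ T] for mixtures Z_1, Z_2 of the uniform distributions on the sets and their complements, which is why the bound becomes an equality when ω_1 = ω_2 = 1 − ℓ/q (the mixtures are then uniform over 𝔽_q).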

In the sequel, we will need to compare the right-hand side of Lemma 26 to 1 − ρ. The following lemma establishes the required bounds.

Lemma 27.

Suppose q is a prime power, 1 ≤ ℓ < q is an integer, and ρ ∈ (0, 1 − ℓ/q). Suppose 0 ≤ η ≤ ρ and ρ − η < ω_1, ω_2 ≤ ρ. Then, the following inequalities hold:

1 − ρ > (1−ρ)² + ρ²·(ℓ/(q−ℓ)), (10)

(1−ρ+η)² + (ρ−η)²·(ℓ/(q−ℓ)) > (1−ω_1)(1−ω_2) + ω_1ω_2·(ℓ/(q−ℓ)). (11)

We can now state a lemma bounding the probability that a nontrivial linear combination of two uniform samples from a list-recovery ball lands in a shift of the list-recovery ball. Then, in Corollary 30, we show how to set parameters to turn this into a statement about the δ-mixing of a list-recovery ball.

Lemma 28.

Let n ∈ ℕ, q a prime power, 1 ≤ ℓ < q an integer, and let ρ ∈ (0, 1 − ℓ/q). Let T_1, …, T_n ⊆ 𝔽_q be subsets, each of size ℓ. Fix η > 0 small enough so that

1 − ρ > (1−ρ+η)² + (ρ−η)²·(ℓ/(q−ℓ)).

Let α, β ∈ 𝔽_q ∖ {0} and y ∈ 𝔽_q^n, and let X, X′ be independent and uniform over B_ρ(T_1 × ⋯ × T_n). Then,

Pr[αX + βX′ ∈ y + B_ρ(T_1 × ⋯ × T_n)] ≤ 2^{2n}·exp_q(n·(h_{q,ℓ}(ρ−η) − h_{q,ℓ}(ρ))) + 2^n·exp_q(−n·D_q(1−ρ ∥ (1−ρ+η)² + (ρ−η)²·(ℓ/(q−ℓ)))). (12)
 Remark 29.

Since Lemma 27 implies

1 − ρ > (1−ρ)² + ρ²·(ℓ/(q−ℓ)) whenever 0 < ρ < 1 − ℓ/q,

it follows that one can indeed choose η small enough to ensure

1 − ρ > (1−ρ−η)² + (ρ+η)²·(ℓ/(q−ℓ)).

In Corollary 30 we show how to choose η > 0 to bound the two terms in Equation 12 by q^{−δn} for a concrete δ > 0 (which depends on ρ, ℓ, and q).

Corollary 30.

Let n ∈ ℕ, q a prime power, 1 < ℓ < q an integer, and let ρ ∈ (0, 1 − ℓ/q). Let T_1, …, T_n ⊆ 𝔽_q, each of size ℓ. The list-recovery ball B_ρ(T_1 × ⋯ × T_n) is δ-mixing, for

δ = min{ (ρ/4)·log_q( ((q−ℓ)(1−ρ)) / (ℓρ) ), (1 − ℓ/q − ρ)² / (16·log q) },

assuming n ≥ (log q / (ρ·(1 − ℓ/q − ρ)))^c for some universal constant c.

4.2 List Recovery from Errors

Having established that the list-recovery ball is δ-mixing, we repeat roughly the same argument as for list-recovery from erasures, which uses Lemma 24.

Theorem 31 (list recovery with errors).

For every n ∈ ℕ, a prime power q, an integer ℓ < q, ρ ∈ (0, 1 − ℓ/q), and ε ∈ (0, 1), the following holds. With probability at least 1 − q^{−n}, a random linear code 𝒞 ⊆ 𝔽_q^n of rate

R = 1 − h_{q,ℓ}(ρ) − ε

is (ρ, ℓ, L)-list-recoverable from errors, with

L = C_{ρ,ℓ,q} · (1/ε),

provided that n is large enough. More concretely, there exists a universal constant C such that

C_{ρ,ℓ,q} ≤ q^{(log q / (ρ·(1 − ℓ/q − ρ)))^C}

provided that n ≥ (log q / (ρ·(1 − ℓ/q − ρ)))^C.

References

  • [1] Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma. Near-optimal erasure list-decodable codes. In 35th Computational Complexity Conference (CCC), pages 1:1–1:27. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.CCC.2020.1.
  • [2] Yeyuan Chen and Zihan Zhang. Explicit folded Reed-Solomon and multiplicity codes achieve relaxed generalized Singleton bounds, 2024. doi:10.48550/arXiv.2408.15925.
  • [3] Dean Doron, Jonathan Mosheiff, Nicolas Resch, and João Ribeiro. List-recovery of random linear codes over small fields, 2025. doi:10.48550/arXiv.2505.05935.
  • [4] Dean Doron, Dana Moshkovitz, Justin Oh, and David Zuckerman. Nearly optimal pseudorandomness from hardness. Journal of the ACM, 69(6), November 2022. doi:10.1145/3555307.
  • [5] Dean Doron and Mary Wootters. High-probability list-recovery, and applications to heavy hitters. In 49th International Colloquium on Automata, Languages, and Programming (ICALP), pages 55:1–55:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ICALP.2022.55.
  • [6] Peter Elias. List decoding for noisy channels. Technical Report Technical Report 335, Research Laboratory of Electronics, MIT, 1957. URL: https://dspace.mit.edu/handle/1721.1/4484.
  • [7] Anna C. Gilbert, Yi Li, Ely Porat, and Martin J. Strauss. For-all sparse recovery in near-optimal time. ACM Transactions on Algorithms, 13(3):1–26, 2017. doi:10.1145/3039872.
  • [8] Eitan Goldberg, Chong Shangguan, and Itzhak Tamo. Singleton-type bounds for list-decoding and list-recovery, and related results. In International Symposium on Information Theory (ISIT), pages 2565–2570. IEEE, 2022. doi:10.1109/ISIT50566.2022.9834849.
  • [9] Sivakanth Gopi, Swastik Kopparty, Rafael Oliveira, Noga Ron-Zewi, and Shubhangi Saraf. Locally testable and locally correctable codes approaching the Gilbert-Varshamov bound. IEEE Transactions on Information Theory, 64(8):5813–5831, 2018. doi:10.1109/TIT.2018.2809788.
  • [10] Zeyu Guo, Ray Li, Chong Shangguan, Itzhak Tamo, and Mary Wootters. Improved list-decodability and list-recoverability of Reed-Solomon codes via tree packings. In 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 708–719. IEEE, 2021. doi:10.1109/FOCS52979.2021.00074.
  • [11] Venkatesan Guruswami. List decoding from erasures: Bounds and code constructions. IEEE Transactions on Information Theory, 49(11):2826–2833, 2003. doi:10.1109/TIT.2003.815776.
  • [12] Venkatesan Guruswami, Johan Håstad, and Swastik Kopparty. On the list-decodability of random linear codes. IEEE Transactions on Information Theory, 57(2):718–725, 2011. Preliminary version at STOC 2010. doi:10.1109/TIT.2010.2095170.
  • [13] Venkatesan Guruswami and Piotr Indyk. Near-optimal linear-time codes for unique decoding and new list-decodable codes over smaller alphabets. In 34th Annual Symposium on Theory of Computing (STOC), pages 812–821. ACM, 2002. doi:10.1145/509907.510023.
  • [14] Venkatesan Guruswami and Piotr Indyk. Linear time encodable and list decodable codes. In 35th Annual Symposium on Theory of Computing (STOC), pages 126–135. ACM, 2003. doi:10.1145/780542.780562.
  • [15] Venkatesan Guruswami and Piotr Indyk. Efficiently decodable codes meeting Gilbert-Varshamov bound for low rates. In 15th Annual Symposium on Discrete Algorithms (SODA), pages 756–757. SIAM, 2004. URL: http://dl.acm.org/citation.cfm?id=982792.982907.
  • [16] Venkatesan Guruswami and Piotr Indyk. Linear-time encodable/decodable codes with near-optimal rate. IEEE Transactions on Information Theory, 51(10):3393–3400, 2005. doi:10.1109/TIT.2005.855587.
  • [17] Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for list-decoding and list-recovery of random linear codes. IEEE Transactions on Information Theory, 68(2):923–939, 2022. doi:10.1109/TIT.2021.3127126.
  • [18] Venkatesan Guruswami and Nicolas Resch. On the list-decodability of random linear rank-metric codes. In International Symposium on Information Theory (ISIT), pages 1505–1509. IEEE, 2018. doi:10.1109/ISIT.2018.8437698.
  • [19] Venkatesan Guruswami, Christopher Umans, and Salil P. Vadhan. Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes. Journal of the ACM, 56(4):20:1–20:34, 2009. doi:10.1145/1538902.1538904.
  • [20] Iftach Haitner, Yuval Ishai, Eran Omri, and Ronen Shaltiel. Parallel hashing via list recoverability. In Advances in Cryptology – CRYPTO 2015, pages 173–190. Springer Berlin Heidelberg, 2015. doi:10.1007/978-3-662-48000-7_9.
  • [21] Brett Hemenway, Noga Ron-Zewi, and Mary Wootters. Local list recovery of high-rate tensor codes and applications. SIAM Journal on Computing, 49(4):FOCS17–157, January 2020. doi:10.1137/17M116149X.
  • [22] Justin Holmgren, Alex Lombardi, and Ron D. Rothblum. Fiat–Shamir via list-recoverable codes (or: parallel repetition of GMW is not zero-knowledge). In 53rd Annual Symposium on Theory of Computing (STOC), pages 750–760. ACM, 2021. doi:10.1145/3406325.3451116.
  • [23] Piotr Indyk, Hung Q. Ngo, and Atri Rudra. Efficiently decodable non-adaptive group testing. In 21st Annual Symposium on Discrete Algorithms (SODA), pages 1126–1142. SIAM, 2010. doi:10.1137/1.9781611973075.91.
  • [24] Itay Kalev and Amnon Ta-Shma. Unbalanced expanders from multiplicity codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), pages 12:1–12:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.APPROX/RANDOM.2022.12.
  • [25] Sergey Komech and Jonathan Mosheiff. Let’s have both! Optimal list-recoverability via alphabet permutation codes, 2025. doi:10.48550/arXiv.2502.05858.
  • [26] Swastik Kopparty, Or Meir, Noga Ron-Zewi, and Shubhangi Saraf. High-rate locally correctable and locally testable codes with sub-polynomial query complexity. Journal of the ACM, 64(2):1–42, 2017. doi:10.1145/3051093.
  • [27] Swastik Kopparty, Noga Ron-Zewi, Shubhangi Saraf, and Mary Wootters. Improved decoding of folded Reed-Solomon and multiplicity codes. In 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 212–223. IEEE, 2018. doi:10.1109/FOCS.2018.00029.
  • [28] Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, and Mikkel Thorup. Heavy hitters via cluster-preserving clustering. In 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 61–70. IEEE, 2016. doi:10.1109/FOCS.2016.16.
  • [29] Vsevolod F. Lev. Linear equations over 𝔽p and moments of exponential sums. Duke Mathematical Journal, 107(2):239–263, 2001. doi:10.1215/S0012-7094-01-10722-9.
  • [30] Matan Levi, Jonathan Mosheiff, and Nikhil Shagrithaya. Random Reed-Solomon codes and random linear codes are locally equivalent, 2024. arXiv:2406.02238.
  • [31] Ray Li and Nikhil Shagrithaya. Near-optimal list-recovery of linear code families, 2025. doi:10.48550/arXiv.2502.13877.
  • [32] Ben Lund and Aditya Potukuchi. On the list recoverability of randomly punctured codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), pages 30:1–30:11. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.30.
  • [33] Jonathan Mosheiff, Nicolas Resch, Kuo Shang, and Chen Yuan. Randomness-efficient constructions of capacity-achieving list-decodable codes, 2024. doi:10.48550/arXiv.2402.11533.
  • [34] Hung Q. Ngo, Ely Porat, and Atri Rudra. Efficiently decodable error-correcting list disjunct matrices and applications. In International Colloquium on Automata, Languages, and Programming (ICALP), pages 557–568. Springer, 2011.
  • [35] Nicolas Resch. List-Decodable Codes: (Randomized) Constructions and Applications. PhD thesis, Carnegie Mellon University, 2020. URL: http://reports-archive.adm.cs.cmu.edu/anon/2020/abstracts/20-113.html.
  • [36] Atri Rudra and Mary Wootters. Every list-decodable code for high noise has abundant near-optimal rate puncturings. In 46th Annual Symposium on Theory of Computing (STOC), pages 764–773. ACM, 2014. doi:10.1145/2591796.2591797.
  • [37] Atri Rudra and Mary Wootters. Average-radius list-recoverability of random linear codes. In 29th Annual Symposium on Discrete Algorithms (SODA), pages 644–662. SIAM, 2018. doi:10.1137/1.9781611975031.42.
  • [38] Amnon Ta-Shma and David Zuckerman. Extractor codes. IEEE Transactions on Information Theory, 50(12):3015–3025, 2004. doi:10.1109/TIT.2004.838377.
  • [39] Itzhak Tamo. Tighter list-size bounds for list-decoding and recovery of folded Reed-Solomon and multiplicity codes. IEEE Transactions on Information Theory, 70(12):8659–8668, 2024. doi:10.1109/TIT.2024.3402171.
  • [40] Victor Vasilievich Zyablov and Mark Semenovich Pinsker. List concatenated decoding. Problemy Peredachi Informatsii, 17(4):29–33, 1981.