Near-Optimal List-Recovery of Linear Code Families
Abstract
We prove several results on linear codes achieving list-recovery capacity. We show that random linear codes achieve list-recovery capacity with constant output list size (independent of the alphabet size and length). That is, over alphabets of size at least , random linear codes of rate are -list-recoverable for all and . Together with a result of Levi, Mosheiff, and Shagrithaya, this implies that randomly punctured Reed–Solomon codes also achieve list-recovery capacity. We also prove that our output list size is near-optimal among all linear codes: all -list-recoverable linear codes must have .
Our simple upper bound combines the Zyablov–Pinsker argument with recent bounds of Kopparty, Ron-Zewi, Saraf, and Wootters, and of Tamo, on the maximum intersection of a “list-recovery ball” and a low-dimensional subspace with large distance. Our lower bound is inspired by a recent lower bound of Chen and Zhang.
Keywords and phrases:
Error-Correcting Codes, Randomness, List-Recovery, Reed–Solomon Codes, Random Linear Codes
Category:
RANDOM
Funding:
Ray Li: Research supported by NSF grant CCF-2347371.
2012 ACM Subject Classification:
Mathematics of computing → Coding theory; Theory of computation → Pseudorandomness and derandomization; Mathematics of computing → Combinatorics
Acknowledgements:
The authors would like to thank Yeyuan Chen and Zihan Zhang for pointing out a mistake in Theorem 12 in an earlier version of the paper.
Editors:
Alina Ene and Eshan Chattopadhyay
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In this work, we study list-recovery for random linear codes and random Reed–Solomon codes, proving near-optimal upper and lower bounds.
An (error correcting) code is a subset of for an alphabet , which, in this work, is always for some prime power . We study linear codes, which are subspaces of . We want codes to be large, meaning they have large rate . We also want codes to tolerate more errors. In the standard unique decoding setting, tolerating many errors means that, for any vector , there is at most one codeword that agrees with on many coordinates.
We study a generalization of the unique-decoding problem known as list-recovery. In list-recovery, we want that, for any combinatorial rectangle, there are few codewords that “agree” with this rectangle on many coordinates. Formally, a code is -list-recoverable if for any sets of size , there are at most codewords such that for at least a fraction of the coordinates. The special case $\ell = L = 1$ of list-recoverability is the standard unique-decoding setting, and the special case $\ell = 1$ (with general output list size $L$) is known as list-decodability.
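A brute-force sketch makes the definition concrete (the toy code, alphabet, and parameter names below are illustrative, not the paper's):

```python
from itertools import combinations, product

def agreement(codeword, lists):
    """Number of coordinates i with codeword[i] in the input list lists[i]."""
    return sum(1 for c, S in zip(codeword, lists) if c in S)

def is_list_recoverable(code, n, alphabet, rho, ell, L):
    """Brute-force check of (rho, ell, L)-list-recoverability: every tuple of
    n input lists of size ell captures at most L codewords that agree with
    the lists on at least a (1 - rho) fraction of coordinates."""
    for lists in product(list(combinations(alphabet, ell)), repeat=n):
        hits = [c for c in code if agreement(c, lists) >= (1 - rho) * n]
        if len(hits) > L:
            return False
    return True

# Toy example over a ternary alphabet: the length-3 repetition code.
alphabet = (0, 1, 2)
code = [(a, a, a) for a in alphabet]
print(is_list_recoverable(code, 3, alphabet, 0.0, 1, 1))  # True: unique decoding
print(is_list_recoverable(code, 3, alphabet, 0.0, 3, 2))  # False: full lists capture all 3 codewords
```

Exhaustive enumeration over all list tuples is of course only feasible at toy scale; it serves to pin down the quantifiers in the definition.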
List-recovery has motivations in coding theory, complexity theory, and algorithms. In coding theory, list-recovery has been used as a tool to obtain efficient list-decoding algorithms [24, 22, 27, 34, 30]. Also, list-recoverable random linear codes – which we study in this work – are used as a building block in other coding constructions [18, 31]. In complexity theory, list-recoverable codes find applications in constructions of other pseudorandom objects such as extractors [48] and condensers [26]. In algorithms, they are also useful primitives in group testing [32, 40], sparse recovery [12], and streaming algorithms [36, 8].
The list-recovery capacity theorem states that is the optimal tradeoff between the error radius and the code rate (see, e.g., [23, 42]). That is, below capacity , there exist -list-recoverable codes of rate , and above capacity , any -list-recoverable code must have exponential list size . The existence holds because uniformly random codes of rate (over sufficiently large alphabets ) are -list-recoverable with high probability.
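The quantitative parameters in this paragraph were lost in extraction; one commonly stated large-alphabet form of the capacity theorem (following standard references such as [23] — the constants here are not necessarily this paper's) is:

```latex
% Hedged reconstruction of the standard statement; constants are illustrative.
% For any $\epsilon > 0$, rate $R \in (0,1)$, and alphabet size
% $q \ge \ell^{\Omega(1/\epsilon)}$:
\begin{itemize}
  \item (Achievability) A uniformly random code of rate $R$ is, with high
        probability, $(1 - R - \epsilon,\ \ell,\ O(\ell/\epsilon))$-list-recoverable.
  \item (Converse) For radius $\rho > 1 - R$, any $(\rho, \ell, L)$-list-recoverable
        code of rate $R$ requires $L = q^{\Omega(n)}$.
\end{itemize}
```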
We wish to understand what kinds of codes achieve list-recovery capacity. A number of explicit code constructions are known to achieve list-recovery capacity, including Folded Reed–Solomon codes, Multiplicity codes, and Folded Algebraic–Geometry codes. Additional techniques – subspace evasive sets, subspace designs, and expander techniques – can be used to improve the output list size and alphabet size [22, 34, 27, 28, 29, 31, 30, 19, 9, 35, 49] (see Table 1 in [35]; see also [47, 5] for even tighter list size bounds in the special case of list-decoding).
Still, several fundamental questions remain open.
1. First, how list-recoverable is a random linear code? A random linear code is a random subspace of . All explicit constructions are based on linear codes (though many are only linear over a subfield), so it is natural to wonder about list-recovery of a “typical” linear code. As list-recovery is a pseudorandom property, this question also addresses the deeper geometric question of “how similar is a random subspace to a random set over ?”, which is well-studied in the more specific context of list-decoding [51, 10, 17, 7, 50, 43, 44, 45, 38, 20, 1].
2. Second, how list-recoverable are Reed–Solomon codes? The above constructions all generalize the Reed–Solomon code, the most fundamental polynomial evaluation code. Can Reed–Solomon codes themselves achieve list-recovery capacity? Given recent progress showing the special case, that Reed–Solomon codes achieve list-decoding capacity [4, 15, 1], the general case of list-recovery has been an obvious and tantalizing open question.
3. Lastly, is there a fundamental separation between linear and nonlinear codes for list-recovery? On one hand, there is no apparent separation for the special case of list-decoding, where random linear codes are list-decodable to capacity with list-size [17, 50, 38, 20], just like uniformly random codes. On the other hand, uniformly random codes are list-recoverable with list size , but all known linear constructions require output list size at least , and this lower bound has been proven in various specific settings [20, 37, 5].
We answer all three questions. We show that random linear codes are list-recoverable to capacity with provably near-optimal output list size. By a recent result of [37], this implies that randomly punctured Reed–Solomon codes are list-recoverable to capacity with near-optimal output list size. Lastly, we prove a fundamental separation between linear and non-linear codes by showing that all linear codes of rate must have list-size at least .
1.1 Our results
We now state our results in the context of prior work.
| Citation | Radius | input list size | output list size |
|---|---|---|---|
| [51, 16] | | | |
| [45] | | | |
| This work | | | |
List recovery for Random Linear Codes
Several known arguments show that random linear codes achieve list-recovery capacity. A random linear code is a code generated by a uniformly random generator matrix . First, the Zyablov–Pinsker argument [51] adapted to list-recovery shows that random linear codes of rate over alphabet are -list-recoverable (see, for example, [16, Lemma 9.6]). Rudra and Wootters [45] improved the output list size to , showing that random linear codes of rate over alphabet are -list-recoverable. We improve the output list size to be independent of the alphabet size .
Theorem 1 (Theorem 8, Informal).
For all , and a random linear code of rate is -list-recoverable with high probability.
Our list size improves on the prior bounds when , which covers most alphabet sizes ( is needed to achieve list-recovery capacity), and the improvement is more significant when is larger. This improvement to an alphabet-independent output list size is critical for Theorem 2 below (see Remark 3). As we show in Theorem 5, this output list size is near optimal among all linear codes.
Our proof is simple, combining the Zyablov–Pinsker [51] argument with recent analyses of the list-recovery of explicit constructions like Folded Reed–Solomon codes. In particular, the Zyablov–Pinsker argument [51] shows that a random linear code can be list-recovered so that, with high probability, the output list always lies in a subspace of dimension at most . Naively, this implies an output list size bound of . However, recent analyses of list-recovering explicit codes [35, 49] showed that subspaces of dimension with good distance – random linear codes are well known to have good distance with high probability – can have at most points inside an -list-recovery ball, thus giving our improved output list size. We also show that we get the best possible output list size for our proof technique, in the sense that, for any linear code, there are output lists that span a subspace of dimension at least (see Proposition 13).
We believe that the simplicity of our proof is a strong indicator that we have found the right way to approach the problem, which had previously resisted various other proof techniques.
List recovery for Random Reed–Solomon Codes
Reed–Solomon codes [41] are the most fundamental evaluation codes. A Reed–Solomon code is given by evaluation points $\alpha_1, \dots, \alpha_n$ in a finite field $\mathbb{F}_q$ and a degree $d$, and is defined as
$$\mathrm{RS}[\alpha_1, \dots, \alpha_n; d] = \{ (f(\alpha_1), \dots, f(\alpha_n)) : f \in \mathbb{F}_q[X],\ \deg f \le d \}.$$
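As a concrete illustration of this evaluation map (the field size, evaluation points, and degree below are arbitrary toy choices, not the paper's parameters):

```python
def rs_encode(coeffs, alphas, p):
    """Evaluate the polynomial with the given coefficients (lowest degree
    first) at each evaluation point alpha, over the prime field F_p."""
    def poly_eval(x):
        y = 0
        for c in reversed(coeffs):  # Horner's rule, highest coefficient first
            y = (y * x + c) % p
        return y
    return tuple(poly_eval(a) for a in alphas)

# Degree <= 1 polynomials on 4 points over F_5: a rate-2/4 RS code.
p, alphas = 5, (0, 1, 2, 3)
codeword = rs_encode((3, 2), alphas, p)  # f(x) = 3 + 2x
print(codeword)  # → (3, 0, 2, 4)
```

Since the map is linear in the coefficients, the resulting code is a linear subspace, which is what the transfer machinery of Section 2.1 exploits.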
List-decoding and list-recovery of Reed–Solomon codes are well-studied questions. The seminal Guruswami–Sudan [24] algorithm showed that Reed–Solomon codes are list-decodable and list-recoverable up to the Johnson radius [33, 25]. Since then, there has been much interest in determining whether Reed–Solomon codes are list-decodable and list-recoverable beyond the Johnson bound, and perhaps even up to capacity (the capacity is for both list-decoding and list-recovery). Initially, there was evidence against this possibility [21, 6, 2], suggesting that Reed–Solomon codes could not be list-decoded or list-recovered much beyond the Johnson bound. Since then, an exciting line of work has shown, to the contrary, that Reed–Solomon codes can beat the Johnson bound for list-decoding [43, 46, 11, 13, 14, 4, 15, 1], and, in fact, can be list-decoded up to capacity [4, 15, 1]. All of these works studied randomly punctured Reed–Solomon codes, where are chosen at random from a larger field .
Despite the exciting progress for list-decoding, there has been comparatively little progress on list-recovery. Lund and Potukuchi [39] and Guo, Li, Shangguan, Tamo, and Wootters [14] proved that (randomly punctured) Reed–Solomon codes are list-recoverable beyond the Johnson bound in the low-rate regime: [39] shows -list-recovery for , and rate , and [14] shows -list-recovery for rate Reed–Solomon codes. Both improve on the Johnson radius of in the low rate setting.
In [37], Levi, Mosheiff, and Shagrithaya showed that random Reed–Solomon codes and random linear codes are locally equivalent, meaning that both random code families achieve identical rate thresholds for all “local properties”, which include (the complements of) list-decoding and list-recovery. Thus, our result for list-recovery of random linear codes transfers to random Reed–Solomon codes as well.
Theorem 2 (Theorem 11, Informal).
For all , a randomly punctured Reed–Solomon code of length over alphabet size , of rate is -list-recoverable with high probability.
We note that the problem of determining optimal list sizes for random Reed–Solomon codes across all rates has proven resistant to a variety of previous approaches. The simplicity of our proof suggests that the method presented here may offer a promising direction for completely resolving this question.
Remark 3.
Remark 4.
A fruitful line of work [22, 34, 27, 9, 35, 49] has culminated in output list sizes of for various explicit list-recoverable codes such as Folded Reed–Solomon codes and Multiplicity codes. This list size is better than our list size of by roughly a factor of in the exponent. However, our results are still interesting because, as described above, the list-recoverability of random linear codes and of Reed–Solomon codes is a fundamental question, and also because our results yield linear codes for list-recovery and use much smaller alphabet sizes.
Lower bounds for list-recovery
We now discuss impossibility results for list-recovery. An early impossibility result of Guruswami and Rudra [21] showed that, in the setting of zero-error list-recovery (), many full-length Reed–Solomon codes of rate require in order to have output list size, so many full-length Reed–Solomon codes cannot be list-recovered beyond the Johnson bound. Note that this does not contradict Theorem 2, as we consider randomly punctured, as opposed to full-length (), codes. More recently, it was shown that achieving list-recovery capacity requires exponential list size for particular codes: random linear codes in the high-rate zero-error () regime [20], random linear codes in general parameter settings [37], and Reed–Solomon codes, Folded Reed–Solomon codes, and Multiplicity codes in general parameter settings [5].
Inspired by the lower bound in [5], we show that any linear code list-recoverable to capacity must have output list size at least .
Theorem 5 (Theorem 12, Informal).
Over any field, any linear code of rate that is list-recoverable must satisfy .
One takeaway from Theorem 5 is that our list sizes of in Theorem 1 and Theorem 2 are near-optimal. Additionally, Doron and Wootters [8] asked whether there were explicit list-recoverable codes with, among other desired guarantees, output list size . Our result shows this is not possible with any linear code. Lastly, our lower bound shows a separation between non-linear and linear codes for list-recovery, which is perhaps surprising given that no such separation exists for list-decoding.
2 Preliminaries
For a prime power , let be the finite field of order . Let denote the set . For a given vector space , let denote the set of all subspaces of . For a given set , let denote the power set of . For a vector , let denote its th entry.
A code is said to be linear if it is a linear subspace, and said to have rate if . We say has relative distance if , where denotes the number of non-zero entries in the codeword . A matrix containing linearly independent columns is said to be a generator matrix of if every codeword can be written as a linear combination of the columns of . is said to contain a set of vectors if for every .
For a vector $y \in \mathbb{F}_q^n$ and sets $S_1, \dots, S_n \subseteq \mathbb{F}_q$, the agreement set is defined as:
$$\mathrm{agr}(y; S_1, \dots, S_n) = \{ i \in [n] : y_i \in S_i \}.$$
A $\rho$-radius $\ell$-list-recovery ball is given by input lists $S_1, \dots, S_n$ of size $\ell$, and is defined to be
$$B_\rho(S_1, \dots, S_n) = \{ y \in \mathbb{F}_q^n : |\{ i \in [n] : y_i \in S_i \}| \ge (1 - \rho) n \}. \tag{1}$$
We can alternatively define -list-recovery using the above definition: a code is -list-recoverable if every -radius -list-recovery ball contains at most codewords.
For , a random linear code (RLC) of rate is a linear code whose generator matrix is a matrix whose entries are chosen uniformly at random from , independently of one another. For , we use to denote the Reed–Solomon (RS) code of rate obtained by evaluating polynomials of degree on evaluation points . We say this is a random RS code if the evaluation points have been chosen uniformly at random and independently of one another. (This is different from the usual model for random RS codes, where it is required that the random evaluation points be distinct. However, it can be shown that both models behave similarly; refer to [37], Appendix A for details.)
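As an illustration of the RLC model (with a prime field and toy parameters standing in for the stripped ones), one can sample a generator matrix and enumerate the code it generates:

```python
from itertools import product
import random

def random_linear_code(n, k, p, seed=0):
    """Sample an n x k generator matrix with i.i.d. uniform entries in F_p
    and enumerate the code {G m : m in F_p^k}.  Note: G may fail to have
    full column rank, in which case the code has fewer than p^k codewords."""
    rng = random.Random(seed)
    G = [[rng.randrange(p) for _ in range(k)] for _ in range(n)]
    code = {tuple(sum(G[i][j] * m[j] for j in range(k)) % p
                  for i in range(n))
            for m in product(range(p), repeat=k)}
    return G, code

G, code = random_linear_code(n=6, k=2, p=3)  # rate 1/3 when G has full rank
# Linearity: the code is closed under coordinate-wise addition mod p.
print(all(tuple((a + b) % 3 for a, b in zip(c1, c2)) in code
          for c1 in code for c2 in code))  # True
```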
2.1 Local Coordinate-Wise Linear (LCL) Properties
We now introduce the machinery of [37] that connects random linear codes to (randomly punctured) Reed–Solomon codes. A code property for codes of block length in is simply a family of codes in . We say that a code satisfies if . Denoting , we say that an infinite family of codes satisfies if for every . In this paper, we focus on local, monotone-increasing code properties. A local code property, informally speaking, is defined by the inclusion of some bad set. A monotone-increasing code property is one for which the following is true: if satisfies , then every for which holds, also satisfies . An example of a local, monotone-increasing code property is the complement of -list-decodability, defined as the family of all codes that contain at least one set of distinct vectors, all lying within a Hamming ball of radius .
For a locality parameter , an ordered tuple of subspaces , where for each is defined to be a -local profile. Note that . We say that a matrix is contained in if the th row of A belongs to , for all . A code is said to contain if
(a) there exists a matrix with distinct columns such that the set of columns of is contained in , and
(b) is contained in .
For a family of -local profiles , where , we define a -local coordinate-wise linear (-LCL) property as follows:
The complement of -list-recoverability is a -LCL property. This is proven in [37, Proposition 2.2], but we provide a justification in this paragraph. Every bad set of vectors lying within a given -radius -list-recovery ball agrees with some input lists on many coordinates. This implies that the vectors agree with one another on many coordinates as well, and once we arrange the bad vector sets as columns in a matrix of dimension , we can specify these agreements as linear constraints on the rows of such matrices. Formally, the property is defined by a family of -local profiles that we now describe. For every , we define by describing the -local profiles that constitute it. Let denote the collection of all -length tuples where each element is a subset of of size exactly . Furthermore, let denote the set of all matrices of dimension having elements in . (Even though and depend on , we have suppressed this dependence in the notation for the sake of clarity.) Then, for every , every , define such that for every ,
Note that each is a subspace of , and therefore is a valid linear profile. We can now define the associated family of linear profiles for the complement of -list-recoverability:
Observe that
(2)
In the same work, the authors also prove a threshold theorem for random linear codes (RLCs) in relation to all LCL properties, and, moreover, give a complete characterization of the rate threshold. Informally, the theorem says that RLCs over a sufficiently large alphabet exhibit a sharp threshold phenomenon for all LCL properties. That is, for every LCL property , there exists a rate threshold such that RLCs of rate satisfy with high probability, and RLCs of rate do not satisfy with high probability.
Theorem 6 ([37], Theorem 3.1).
Let be a -LCL property of codes in and let be a corresponding family of profiles. Let be an RLC of rate R. Then, there is some threshold rate for which the following holds.
1. If then .
2. If then .
3. In particular, if and then .
The concept of LCL properties allows for “transfer type” theorems between random linear codes and random RS codes. In more detail, for every reasonable LCL property (that is, for every LCL property whose corresponding family of profiles is large), the rate thresholds for random linear codes and random RS codes are the same. That is, any rate threshold proved for LCL properties of RLCs also applies for random RS codes, and vice versa. For our purposes, we only require one part of this result, which we formally state below.
Theorem 7 ([37], Theorem 3.10 (part 1) (Threshold theorem for RS codes)).
Let be a -LCL property of codes in , with associated local profile family and (random linear code) threshold rate . Let and let , where are sampled independently and uniformly from . Assume that . Fix . If , then
(3)
3 List-recovery of Random Linear Codes
In this section, we prove Theorem 1, that random linear codes achieve list-recovery capacity with constant output list size. Formally, we show the following.
Theorem 8.
Fix , so that , , and let be a prime power such that . Let be an RLC of rate . Then with probability at least , is -list-recoverable with satisfying .
The theorem follows as a consequence of two lemmas. We first state both lemmas, and then give the proof of Theorem 8 using them. The first lemma essentially states that any low dimensional subspace with good distance has few points in a list-recovery ball. This lemma appears in [35, 49]; we state the version from [49, Lemma 3.1].
Lemma 9 ([49], see also [35]).
For and , let be a linear code with relative distance that is -list-recoverable. Assume further that any output list is contained in a subspace of dimension . Then the output list size .
The second lemma uses the Zyablov–Pinsker argument [51], showing that a random linear code does not have too many linearly independent codewords within a list-recovery ball.
Lemma 10.
Fix , so that , , and let be a prime power such that . Let be an RLC of rate . Then with probability at least , for all input lists of size , the maximal linearly independent subset of within the radius -list-recovery ball has size less than .
Proof of Lemma 10.
Denote and . We also assume is a multiple of for simplicity of exposition, and note that the result holds in the general case as well. We show that an RLC “avoids” all bad configurations with high probability. A bad configuration is a set of linearly independent vectors of size such that there exist input lists of size , so that . We say that contains a bad configuration if for every , is also in . If this condition is not satisfied, then we say that does not contain . It is easy to see that if contains no bad configurations, then the maximal linearly independent subset of within any -list-recovery ball of radius has size less than . Therefore we show that the probability of containing a bad configuration is low.
Fix input lists of size , and let be the corresponding radius -list-recovery ball. The size of is . The probability that a particular configuration is bad is equal to the probability of the encodings of some linearly independent messages being inside simultaneously, which is . By a union bound over at most possible input lists and all -sized linearly independent subsets of the message vectors in (there are at most such subsets), we have
(4)
In Equation 4, we used , and for the last inequality, we used . This implies that the probability with which does not contain any bad configuration is at least .
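Since the constants in Lemma 10 were lost in extraction, the shape of this union bound can be sketched numerically with hypothetical parameters. The three log-factors below mirror the proof: the number of input-list tuples, the number of size-D linearly independent subsets of messages, and the probability that D independent codewords all land in one list-recovery ball:

```python
import math

def h2(x):
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Hypothetical parameters (the paper's exact constants did not survive extraction).
q, n, ell, rho, R, D = 16, 200, 4, 0.25, 0.1, 60

# log2 Pr[a uniform vector lies in one (rho, ell)-list-recovery ball]:
# choose the disagreement set, ell symbols on agreement coords, q elsewhere.
log_pr_in_ball = (n * h2(rho) + (1 - rho) * n * math.log2(ell)
                  + rho * n * math.log2(q) - n * math.log2(q))

log_num_list_tuples = ell * n * math.log2(q)    # crude bound on #input-list tuples
log_num_msg_subsets = D * R * n * math.log2(q)  # bound on #size-D message subsets

# Union bound: D linearly independent codewords of an RLC are i.i.d. uniform.
log_pr_bad = log_num_list_tuples + log_num_msg_subsets + D * log_pr_in_ball
print(log_pr_bad < 0)  # True: the union bound decays exponentially in n
```

With these illustrative numbers, each codeword lands in a fixed ball with probability about 2^(-138), so taking D around 60 already overwhelms the roughly 2^8000 choices in the union bound.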
We now prove Theorem 8.
Proof of Theorem 8.
Denote . Denote by the event that for an RLC of rate , the maximal linearly independent subset of within every radius -list-recovery ball has size less than . By Lemma 10, we know that happens with probability at least . Let denote the event that a rate RLC has distance at least . By the Gilbert–Varshamov bound (see [23], Section 4.2) and the fact that , happens with probability at least . Therefore, we have
When and occur simultaneously, the assumptions of Lemma 9 are satisfied by with and , and therefore we see that
where is ranging over all radius -list-recovery balls, and we are done.
4 List-Recovery of Reed–Solomon codes
In this section, we prove the following result, which says that random Reed–Solomon codes are list-recoverable to capacity with constant output list size. The proof combines Theorem 8 from the previous section with Theorems 6 and 7, the threshold and transfer theorems from [37].
Corollary 11.
Fix , so that , . Fix a constant such that , and denote . Let be a constant, and let be a prime power satisfying . Then, a random RS code of rate over is -list-recoverable with probability at least .
Proof of Corollary 11.
Denote and . Let be the -LCL property of not being -list-recoverable, and let be the corresponding (random linear code) threshold rate. By Theorem 6, part 1 [37], we know that if is an RLC of rate , then the following holds for every constant :
According to Theorem 8, a rate RLC (having a sufficiently large alphabet size) satisfies only with probability at most . Therefore, for every , and so .
We will now work with random RS codes having rate slightly less than . Define , and take to be large enough so that . Note that . Define , where are sampled independently and uniformly from . Denoting by the local profile family associated with property , we see that the hypothesis of Theorem 7 [37] is satisfied, and therefore Equation 3 holds.
5 Any linear code needs output list-size
We now prove our lower bounds for list-recovery, that any linear code list-recoverable to capacity needs output list size .
Theorem 12.
Let , be a positive integer, and be sufficiently large. Let be a linear code of rate . If is -list-recoverable, then .
Proof.
Let be the dimension of the code. Let . Let . By Gaussian elimination and permuting rows and columns, we may, without loss of generality, write the generator matrix of as
(5)
where each is a length vector. For , let denote the columns. By rank-nullity, there exist vectors such that is a linear combination of such that is not supported on indices (there are vectors and indices). Now let . Restricted to indices in , vectors have pairwise disjoint supports: within indices , for , vector is supported on , and vector is supported on .
Now fix arbitrary distinct values . Consider the output list to be all linear combinations of with coefficients from . The fact that the vectors have pairwise disjoint supports on implies (i) the vectors are linearly independent, and so all vectors in are distinct, and (ii) the vectors in can only take on one of values at any index in . Therefore, we can choose input lists , each of size such that all codewords in agree with all of .
Choosing the rest of the input lists arbitrarily, we see that if this code is list-recoverable with radius , then the list size satisfies , where we used that .
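The key combinatorial step — codewords with pairwise disjoint supports generate many distinct codewords that are confined to small per-coordinate lists — can be checked directly. The vectors, field, and coefficient set below are illustrative toy choices, not the proof's:

```python
from itertools import product

p = 7
# Three hypothetical codewords with pairwise disjoint supports.
v = [(1, 2, 0, 0, 0, 0),
     (0, 0, 3, 1, 0, 0),
     (0, 0, 0, 0, 2, 5)]
coeffs = (1, 2)  # ell = 2 distinct coefficient values

# Enumerate all |coeffs|^3 combinations sum(a_i * v_i) mod p.  Disjoint
# supports mean each coordinate of the sum takes one of at most ell values,
# so all combinations fit inside per-coordinate lists of size ell.
combos = set()
lists = [set() for _ in range(6)]
for a in product(coeffs, repeat=3):
    w = tuple(sum(ai * vi[j] for ai, vi in zip(a, v)) % p for j in range(6))
    combos.add(w)
    for j, x in enumerate(w):
        lists[j].add(x)

print(len(combos))                       # → 8: ell^3 distinct codewords
print(all(len(S) <= 2 for S in lists))   # → True: every list has size <= ell
```

This is exactly the mechanism forcing the exponential-in-dimension list-size lower bound: with m disjointly supported codewords, one gets ell^m codewords inside a single tuple of size-ell input lists.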
We also show that our Zyablov–Pinsker-type argument in Theorem 1 (Lemma 10) is tight, in the sense that any linear code must have linearly independent codewords in a list-recovery ball.
Proposition 13.
Let , be a positive integer, and be sufficiently large. Let be a linear code of rate . Then there exists a radius -list-recovery ball that contains at least linearly independent elements of .
Proof.
Upon writing the generator matrix of in the same form as described above in the proof of Theorem 12, consider the first columns of the generator matrix. Denote these linearly independent column vectors by . Create input lists each of size that contain and , but are otherwise arbitrary. Then create lists of size , each containing elements that are evenly distributed so that they agree equally with each of . Thus, each of agrees with on the first , and on at least of the remaining coordinates. Therefore, these vectors lie inside a -radius -list-recovery ball around , as desired.
6 Concluding remarks
We showed that random linear codes and Reed–Solomon codes are list-recoverable to capacity with near-optimal output-list size. Several open questions remain.
1. What is the optimal output-list size for random linear codes and Reed–Solomon codes? There is a gap between our upper bound of and the lower bound of . We surmise that the correct answer is closer to the lower bound.
2.
3. Our alphabet size for list-recovering Reed–Solomon codes (Theorem 2) is optimal in that it is linear in , but the constant is double-exponential in . By contrast, for list-decoding, the best known alphabet size for achieving capacity has an exponential-type constant [1]. Can our alphabet size be improved?
References
- [1] Omar Alrabiah, Venkatesan Guruswami, and Ray Li. Randomly punctured reed-solomon codes achieve list-decoding capacity over linear-sized fields. In Bojan Mohar, Igor Shinkar, and Ryan O’Donnell, editors, Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24-28, 2024, pages 1458–1469. ACM, 2024. doi:10.1145/3618260.3649634.
- [2] Eli Ben-Sasson, Swastik Kopparty, and Jaikumar Radhakrishnan. Subspace Polynomials and Limits to List Decoding of Reed-Solomon Codes. IEEE Trans. Inform. Theory, 56(1):113–120, January 2010. doi:10.1109/TIT.2009.2034780.
- [3] Joshua Brakensiek, Manik Dhar, Sivakanth Gopi, and Zihan Zhang. AG codes achieve list decoding capacity over constant-sized fields. CoRR, abs/2310.12898, 2023. doi:10.48550/arXiv.2310.12898.
- [4] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Generic reed-solomon codes achieve list-decoding capacity. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 1488–1501. ACM, 2023. doi:10.1145/3564246.3585128.
- [5] Yeyuan Chen and Zihan Zhang. Explicit folded reed-solomon and multiplicity codes achieve relaxed generalized singleton bounds. In Michal Koucký and Nikhil Bansal, editors, Proceedings of the 57th Annual ACM Symposium on Theory of Computing, STOC 2025, Prague, Czechia, June 23-27, 2025, pages 1–12. ACM, 2025. doi:10.1145/3717823.3718114.
- [6] Qi Cheng and Daqing Wan. On the List and Bounded Distance Decodability of Reed-Solomon Codes. SIAM J. Comput., 37(1):195–209, April 2007. doi:10.1137/S0097539705447335.
- [7] Mahdi Cheraghchi, Venkatesan Guruswami, and Ameya Velingker. Restricted isometry of Fourier matrices and list decodability of random linear codes. SIAM J. Comput., 42(5):1888–1914, 2013. doi:10.1137/120896773.
- [8] Dean Doron and Mary Wootters. High-Probability List-Recovery, and Applications to Heavy Hitters. In Electron. Colloquium Comput. Complex., volume 27, page 162, 2020.
- [9] Zeev Dvir and Shachar Lovett. Subspace evasive sets. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 351–358, 2012. doi:10.1145/2213977.2214010.
- [10] Peter Elias. Error-correcting codes for list decoding. IEEE Transactions on Information Theory, 37(1):5–12, 1991. doi:10.1109/18.61123.
- [11] Asaf Ferber, Matthew Kwan, and Lisa Sauermann. List-decodability with large radius for Reed-Solomon codes. IEEE Transactions on Information Theory, 68(6):3823–3828, 2022. doi:10.1109/TIT.2022.3148779.
- [12] Anna C Gilbert, Hung Q Ngo, Ely Porat, Atri Rudra, and Martin J Strauss. l2/l2-foreach sparse recovery with low risk. In International Colloquium on Automata, Languages, and Programming, pages 461–472. Springer, 2013. doi:10.1007/978-3-642-39206-1_39.
- [13] Eitan Goldberg, Chong Shangguan, and Itzhak Tamo. Singleton-type bounds for list-decoding and list-recovery, and related results. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 2565–2570. IEEE, 2022. doi:10.1109/ISIT50566.2022.9834849.
- [14] Zeyu Guo, Ray Li, Chong Shangguan, Itzhak Tamo, and Mary Wootters. Improved list-decodability and list-recoverability of Reed-Solomon codes via tree packings. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 708–719. IEEE, 2022.
- [15] Zeyu Guo and Zihan Zhang. Randomly punctured reed-solomon codes achieve the list decoding capacity over polynomial-size alphabets. In 64th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2023, Santa Cruz, CA, USA, November 6-9, 2023, pages 164–176. IEEE, 2023. doi:10.1109/FOCS57990.2023.00019.
- [16] Venkatesan Guruswami. List Decoding of Error-Correcting Codes (Winning Thesis of the 2002 ACM Doctoral Dissertation Competition), volume 3282 of Lecture Notes in Computer Science. Springer, 2004. doi:10.1007/B104335.
- [17] Venkatesan Guruswami, Johan Håstad, and Swastik Kopparty. On the List-Decodability of Random Linear Codes. IEEE Trans. Inform. Theory, 57(2):718–725, February 2011. doi:10.1109/TIT.2010.2095170.
- [18] Venkatesan Guruswami and Piotr Indyk. Expander-based constructions of efficiently decodable codes. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pages 658–667. IEEE, 2001. doi:10.1109/SFCS.2001.959942.
- [19] Venkatesan Guruswami and Swastik Kopparty. Explicit subspace designs. Combinatorica, 36(2):161–185, 2016. doi:10.1007/S00493-014-3169-1.
- [20] Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for list-decoding and list-recovery of random linear codes. IEEE Transactions on Information Theory, 68(2):923–939, 2021. doi:10.1109/TIT.2021.3127126.
- [21] Venkatesan Guruswami and Atri Rudra. Limits to List Decoding Reed–Solomon Codes. IEEE Trans. Inform. Theory, 52(8):3642–3649, August 2006. Place: Piscataway, NJ, USA Publisher: IEEE Press. doi:10.1109/TIT.2006.878164.
- [22] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–150, 2008. Publisher: IEEE. doi:10.1109/TIT.2007.911222.
- [23] Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. Essential coding theory, 2022. Draft available at https://cse.buffalo.edu/faculty/atri/courses/coding-theory/book/prev-ver/web-coding-book-Jan31-2022.pdf.
- [24] Venkatesan Guruswami and Madhu Sudan. Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes. IEEE Transactions on Information Theory, 45(6):1757–1767, 1999. doi:10.1109/18.782097.
- [25] Venkatesan Guruswami and Madhu Sudan. Extensions to the Johnson bound. Manuscript, February 2001.
- [26] Venkatesan Guruswami, Christopher Umans, and Salil P. Vadhan. Unbalanced expanders and randomness extractors from parvaresh-vardy codes. J. ACM, 56(4):20:1–20:34, 2009. doi:10.1145/1538902.1538904.
- [27] Venkatesan Guruswami and Carol Wang. Linear-algebraic list decoding for variants of Reed–Solomon codes. IEEE Transactions on Information Theory, 59(6):3257–3268, 2013. doi:10.1109/TIT.2013.2246813.
- [28] Venkatesan Guruswami and Chaoping Xing. Folded codes from function field towers and improved optimal rate list decoding. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 339–350, 2012. doi:10.1145/2213977.2214009.
- [29] Venkatesan Guruswami and Chaoping Xing. List decoding Reed-Solomon, Algebraic-Geometric, and Gabidulin subcodes up to the Singleton bound. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 843–852, 2013. doi:10.1145/2488608.2488715.
- [30] Brett Hemenway, Noga Ron-Zewi, and Mary Wootters. Local list recovery of high-rate tensor codes and applications. SIAM Journal on Computing, 49(4):FOCS17–157, 2019. doi:10.1137/17M116149X.
- [31] Brett Hemenway and Mary Wootters. Linear-time list recovery of high-rate expander codes. Information and Computation, 261:202–218, 2018. doi:10.1016/J.IC.2018.02.004.
- [32] Piotr Indyk, Hung Q. Ngo, and Atri Rudra. Efficiently decodable non-adaptive group testing. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1126–1142. SIAM, 2010. doi:10.1137/1.9781611973075.91.
- [33] Selmer Johnson. A new upper bound for error-correcting codes. IRE Transactions on Information Theory, 8(3):203–207, 1962. doi:10.1109/TIT.1962.1057714.
- [34] Swastik Kopparty. List-decoding multiplicity codes. Theory of Computing, 11(1):149–182, 2015. doi:10.4086/TOC.2015.V011A005.
- [35] Swastik Kopparty, Noga Ron-Zewi, Shubhangi Saraf, and Mary Wootters. Improved decoding of folded Reed-Solomon and multiplicity codes. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 212–223. IEEE, 2018.
- [36] Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, and Mikkel Thorup. Heavy hitters via cluster-preserving clustering. Communications of the ACM, 62(8):95–100, 2019. doi:10.1145/3339185.
- [37] Matan Levi, Jonathan Mosheiff, and Nikhil Shagrithaya. Random Reed-Solomon Codes and Random Linear Codes are Locally Equivalent, November 2024. arXiv:2406.02238. doi:10.48550/arXiv.2406.02238.
- [38] Ray Li and Mary Wootters. Improved list-decodability of random linear binary codes. IEEE Transactions on Information Theory, 67(3):1522–1536, 2020. doi:10.1109/TIT.2020.3041650.
- [39] Ben Lund and Aditya Potukuchi. On the List Recoverability of Randomly Punctured Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020), volume 176, pages 30:1–30:11, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.30.
- [40] Hung Q. Ngo, Ely Porat, and Atri Rudra. Efficiently decodable error-correcting list disjunct matrices and applications. In Automata, Languages and Programming: 38th International Colloquium, ICALP 2011, Zurich, Switzerland, July 4-8, 2011, Proceedings, Part I 38, pages 557–568. Springer, 2011.
- [41] I. S. Reed and G. Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–304, 1960. doi:10.1137/0108018.
- [42] Nicolas Resch. List-decodable codes: (randomized) constructions and applications. PhD thesis, Carnegie Mellon University, 2020.
- [43] Atri Rudra and Mary Wootters. Every list-decodable code for high noise has abundant near-optimal rate puncturings. In David B. Shmoys, editor, Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 764–773. ACM, 2014. doi:10.1145/2591796.2591797.
- [44] Atri Rudra and Mary Wootters. It’ll probably work out: Improved list-decoding through random operations. In Tim Roughgarden, editor, Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS 2015, Rehovot, Israel, January 11-13, 2015, pages 287–296. ACM, 2015. doi:10.1145/2688073.2688092.
- [45] Atri Rudra and Mary Wootters. Average-radius list-recoverability of random linear codes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 644–662. SIAM, 2018. doi:10.1137/1.9781611975031.42.
- [46] Chong Shangguan and Itzhak Tamo. Combinatorial list-decoding of Reed-Solomon codes beyond the Johnson radius. In Proceedings of the 52nd Annual ACM Symposium on Theory of Computing, STOC 2020, pages 538–551, 2020. doi:10.1145/3357713.3384295.
- [47] Shashank Srivastava. Improved list size for folded Reed-Solomon codes. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 2040–2050. SIAM, 2025. doi:10.1137/1.9781611978322.64.
- [48] Amnon Ta-Shma and David Zuckerman. Extractor codes. IEEE Trans. Inf. Theory, 50(12):3015–3025, 2004. doi:10.1109/TIT.2004.838377.
- [49] Itzhak Tamo. Tighter List-Size Bounds for List-Decoding and Recovery of Folded Reed-Solomon and Multiplicity Codes, December 2023. arXiv:2312.17097. doi:10.48550/arXiv.2312.17097.
- [50] Mary Wootters. On the List Decodability of Random Linear Codes with Large Error Rates. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pages 853–860, New York, NY, USA, 2013. ACM. doi:10.1145/2488608.2488716.
- [51] Victor Vasilievich Zyablov and Mark Semenovich Pinsker. List concatenated decoding. Problemy Peredachi Informatsii, 17(4):29–33, 1981. Russian Academy of Sciences.
