Generalized List Decoding

Authors Yihan Zhang, Amitalok J. Budkuley, Sidharth Jaggi




File

LIPIcs.ITCS.2020.51.pdf
  • Filesize: 1.2 MB
  • 83 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2020.51

Author Details

Yihan Zhang
  • Department of Information Engineering, The Chinese University of Hong Kong
Amitalok J. Budkuley
  • Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, India
Sidharth Jaggi
  • Department of Information Engineering, The Chinese University of Hong Kong

Acknowledgements

We thank Andrej Bogdanov, who provided an elegant reduction from general $L$ to $L=2$ for the proof of the asymmetric case of the converse (Lemma 68) and rederived Blinovsky's [Blinovsky, 1986] characterization of the Plotkin point $P_{L-1}$ for $(p, L-1)$-list decoding via a conceptually cleaner proof (Sec. 16), even though he generously declined to co-author this paper. We also thank him for inspiring discussions in the early stages and helpful comments near the end of this work. Part of this work was done while YZ was visiting the Simons Institute for the Theory of Computing for the Summer Cluster: Error-Correcting Codes and High-Dimensional Expansion, and AJB was at the Department of Information Engineering, The Chinese University of Hong Kong. This work was partially supported by GRF grants 14301519 and 14313116.

Cite As

Yihan Zhang, Amitalok J. Budkuley, and Sidharth Jaggi. Generalized List Decoding. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 51:1-51:83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020) https://doi.org/10.4230/LIPIcs.ITCS.2020.51

Abstract

This paper concerns itself with the question of list decoding for general adversarial channels, e.g., bit-flip (XOR) channels, erasure channels, AND (Z-) channels, OR channels, real adder channels, noisy typewriter channels, etc. We precisely characterize when exponential-sized (or positive-rate) $(L-1)$-list decodable codes (where the list size $L$ is a universal constant) exist for such channels. Our criterion essentially asserts that:
For any given general adversarial channel, it is possible to construct positive-rate $(L-1)$-list decodable codes if and only if the set of completely positive tensors of order $L$ with admissible marginals is not entirely contained in the order-$L$ confusability set associated with the channel.
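To fix ideas, the notion of complete positivity invoked in the criterion is the standard one (stated here with illustrative notation: $\mathcal{X}$ denotes the channel input alphabet; the precise meaning of "admissible marginals" depends on the paper's input constraints). An order-$L$ tensor $T \in (\mathbb{R}^{\mathcal{X}})^{\otimes L}$ is completely positive if it admits a decomposition
$$T \;=\; \sum_{k} \lambda_k\, x_k^{\otimes L}, \qquad \lambda_k \ge 0,\ x_k \in \mathbb{R}_{\ge 0}^{\mathcal{X}},$$
i.e., if it is a finite nonnegative combination of $L$-fold tensor powers of entrywise nonnegative vectors. When $T$ is additionally a probability tensor, this is the same as saying that the joint distribution of the $L$ coordinates is a mixture of i.i.d. product distributions $Q^{\otimes L}$; the admissible-marginals condition, roughly, further restricts the marginals of $T$ to input distributions permitted by the channel (see the paper for the precise definition).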
The sufficiency is shown via a random code construction (combined with expurgation or time-sharing). The necessity is shown by
1) extracting approximately equicoupled subcodes (a generalization of equidistant codes) from any large code using hypergraph Ramsey's theorem, and
2) significantly extending the classic Plotkin bound in coding theory to list decoding for general channels using duality between the completely positive tensor cone and the copositive tensor cone (recalled below).
In the proof, we also obtain a new fact regarding asymmetry of joint distributions, which may be of independent interest.
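The duality used in step 2) is the order-$L$ tensor analogue of the classical completely positive/copositive matrix duality. For reference (writing $\mathcal{CP}_L$ and $\mathcal{COP}_L$ for the two cones, which need not match the paper's notation): a symmetric order-$L$ tensor $A$ is copositive if its multilinear form is nonnegative on the nonnegative orthant, i.e., $\langle A, x^{\otimes L}\rangle \ge 0$ for all $x \ge 0$, and under the entrywise (Frobenius) inner product the two closed convex cones are dual to each other:
$$\mathcal{COP}_L = \{A : \langle A, T\rangle \ge 0\ \forall\, T \in \mathcal{CP}_L\}, \qquad \mathcal{CP}_L = \{T : \langle T, A\rangle \ge 0\ \forall\, A \in \mathcal{COP}_L\}.$$
In particular, a tensor lies outside $\mathcal{CP}_L$ exactly when some copositive tensor strictly separates it from the cone; this separation is the generic mechanism that such duality arguments exploit to prove Plotkin-type converses.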
Other results include
1) the list decoding capacity with asymptotically large $L$ for general adversarial channels;
2) a tight list size bound for most constant composition codes (a generalization of constant weight codes);
3) a rederivation and demystification of Blinovsky's [Blinovsky, 1986] characterization of the list decoding Plotkin points (the threshold at which large codes become impossible) for bit-flip channels (cf. the identity recalled after this list);
4) an evaluation of general bounds [Wang et al., 2019] for unique decoding in the error correction code setting.
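Related to item 3), reference 14 below records the identity
$$\sum_{i=1}^{\lceil L/2\rceil} \binom{2i-2}{i-1}\,\frac{2^{-2i}}{i} \;=\; \frac{1}{2} - 2^{-L-1}\binom{L}{(L-1)/2} \qquad (L \text{ odd}),$$
the kind of closed form in which Blinovsky's bit-flip thresholds are expressed (the correspondence with the paper's $P_{L-1}$ indexing for $(p, L-1)$-list decoding is left to the text); at $L=1$ the right-hand side evaluates to $1/4$, the classical Plotkin threshold for unique decoding, and at $L=3$ to $5/16$.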

Subject Classification

ACM Subject Classification
  • Mathematics of computing → Coding theory
  • Mathematics of computing → Information theory
Keywords
  • Generalized Plotkin bound
  • general adversarial channels
  • equicoupled codes
  • random coding
  • completely positive tensors
  • copositive tensors
  • hypergraph Ramsey theory


References

  1. Rudolf Ahlswede. Channels with arbitrarily varying channel probability functions in the presence of noiseless feedback. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 25(3):239-252, 1973. Google Scholar
  2. Noga Alon, Boris Bukh, and Yury Polyanskiy. List-decodable zero-rate codes. IEEE Transactions on Information Theory, 65(3):1657-1667, 2018. Google Scholar
  3. Alexei Ashikhmin, Alexander Barg, and Simon Litsyn. A new upper bound on codes decodable into size-2 lists. In Numbers, Information and Complexity, pages 239-244. Springer, 2000. Google Scholar
  4. L. A. Bassalygo. New upper bounds for error-correcting codes. Problems of Information Transmission, 1:32-35, 1965. Google Scholar
  5. Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma. Near-Optimal Erasure List-Decodable Codes, 2018. URL: https://eccc.weizmann.ac.il/report/2018/065/.
  6. Elwyn R Berlekamp. Block coding with noiseless feedback. PhD thesis, Massachusetts Institute of Technology, 1964. Google Scholar
  7. Abhishek Bhowmick and Shachar Lovett. List decoding Reed-Muller codes over small fields. arXiv preprint, 2014. URL: http://arxiv.org/abs/1407.3433.
  8. David Blackwell, Leo Breiman, and A. J. Thomasian. The Capacity of a Class of Channels. Annals of Mathematical Statistics, 30(4):1229-1241, 1959. Google Scholar
  9. Vladimir M Blinovsky. Bounds for codes in the case of list decoding of finite volume. Problems of Information Transmission, 22:7-19, 1986. Google Scholar
  10. Vladimir M Blinovsky. Code bounds for multiple packings over a nonbinary finite alphabet. Problems of Information Transmission, 41:23-32, 2005. Google Scholar
  11. Vladimir M Blinovsky. On the convexity of one coding-theory function. Problems of Information Transmission, 44:34-39, 2008. Google Scholar
  12. Boris Bukh, Venkatesan Guruswami, and Johan Håstad. An improved bound on the fraction of correctable deletions. IEEE Transactions on Information Theory, 63(1):93-103, 2016. Google Scholar
  13. Z. Chen, S. Jaggi, and M. Langberg. A Characterization of the Capacity of Online (causal) Binary Channels. In Proc. ACM Symposium on Theory of Computing (STOC), Portland, USA, June 2015. Google Scholar
  14. Sean Clark. How to show $\sum_{i=1}^{\lceil L/2\rceil} \binom{2i-2}{i-1}\frac{1}{i}\,2^{-2i} = \frac{1}{2} - 2^{-L-1}\binom{L}{(L-1)/2}$? Mathematics Stack Exchange, February 2019. URL: https://math.stackexchange.com/q/3101258 (version: 2019-02-05).
  15. G. D. Cohen, S. N. Litsyn, and G. Zemor. Upper bounds on generalized distances. IEEE Transactions on Information Theory, 40:2090-2092, November 1994. Google Scholar
  16. Imre Csiszár and János Körner. Information theory: coding theorems for discrete memoryless systems. Cambridge University Press, 2011. Google Scholar
  17. Imre Csiszár and Prakash Narayan. The Capacity of the Arbitrarily Varying Channel Revisited: Positivity, Constraints. IEEE Transactions on Information Theory, 34:181-193, 1988. Google Scholar
  18. Philippe Delsarte. An algebraic approach to the association schemes of coding theory. Technical report, Philips Research Laboratories, 1973. Google Scholar
  19. Ilias Diakonikolas, Daniel M Kane, and Alistair Stewart. List-decodable robust mean estimation and learning mixtures of spherical gaussians. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1047-1060. ACM, 2018. Google Scholar
  20. Dean Doron, Dana Moshkovitz, Justin Oh, and David Zuckerman. Nearly Optimal Pseudorandomness From Hardness. ECCC preprint TR19-099, 2019. Google Scholar
  21. Peter Elias. List decoding for noisy channels. Technical report, Research Laboratory of Electronics, Massachusetts Institute of Technology, 1957. Google Scholar
  22. Edgar N Gilbert. A comparison of signalling alphabets. The Bell system technical journal, 31(3):504-522, 1952. Google Scholar
  23. Patrick Groetzner and Mirjam Dür. A factorization method for completely positive matrices. preprint, 2018. Google Scholar
  24. V. Guruswami. List Decoding of Error-Correcting Codes (Lecture Notes in Computer Science). Springer-Verlag, NY, 2004. Google Scholar
  25. Venkatesan Guruswami. List decoding in average-case complexity and pseudorandomness. In 2006 IEEE Information Theory Workshop-ITW'06 Punta del Este, pages 32-36. IEEE, 2006. Google Scholar
  26. Venkatesan Guruswami and Srivatsan Narayanan. Combinatorial limitations of average-radius list decoding. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 591-606. Springer, 2013. Google Scholar
  27. Christopher J Hillar and Lek-Heng Lim. Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):45, 2013. Google Scholar
  28. Sushrut Karmalkar, Pravesh Kothari, and Adam Klivans. List-Decodable Linear Regression. arXiv preprint, 2019. URL: http://arxiv.org/abs/1905.05679.
  29. A. Lapidoth and P. Narayan. Reliable Communication under Channel Uncertainty. IEEE Transactions on Information Theory, 44:2148-2177, 1998. Google Scholar
  30. Jessie MacWilliams. A theorem on the distribution of weights in a systematic code. Bell System Technical Journal, 42(1):79-94, 1963. Google Scholar
  31. R. J. McEliece, E. R. Rodemich, H. Rumsey Jr., and L. R. Welch. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities. IEEE Transactions on Information Theory, 23, 1977. Google Scholar
  32. Michael Navon and Alex Samorodnitsky. Linear programming bounds for codes via a covering argument. Discrete & Computational Geometry, 41(2):199, 2009. Google Scholar
  33. Morris Plotkin. Binary codes with specified minimum distance. IRE Transactions on Information Theory, 6(4):445-450, 1960. Google Scholar
  34. Yury Polyanskiy. Upper bound on list-decoding radius of binary codes. IEEE Transactions on Information Theory, 62(3):1119-1128, 2016. Google Scholar
  35. Prasad Raghavendra and Morris Yau. List Decodable Learning via Sum of Squares. arXiv preprint, 2019. URL: http://arxiv.org/abs/1905.04660.
  36. Atri Rudra and Mary Wootters. Every list-decodable code for high noise has abundant near-optimal rate puncturings. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, 2014. Google Scholar
  37. Atri Rudra and Mary Wootters. It'll probably work out: improved list-decoding through random operations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, 2015. Google Scholar
  38. Atri Rudra and Mary Wootters. Average-radius list-recoverability of random linear codes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018. Google Scholar
  39. Anand Sarwate. Robust and adaptive communication under uncertain interference. PhD thesis, University of California, Berkeley, 2008. Google Scholar
  40. Michael Sipser and Daniel A Spielman. Expander codes. IEEE Transactions on Information Theory, 42(6):1710-1722, 1996. Google Scholar
  41. Amnon Ta-Shma. Explicit, almost optimal, epsilon-balanced codes. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017. Google Scholar
  42. R. R. Varshamov. Estimate of the number of signals in error correcting codes. Doklady Akad. Nauk SSSR, 117:739-741, 1957. Google Scholar
  43. Xishi (Nicholas) Wang, Amitalok J. Budkuley, Andrej Bogdanov, and Sidharth Jaggi. When are large codes possible for AVCs? In IEEE International Symposium on Information Theory (ISIT), Paris, pages 632-636. IEEE, 2019. Google Scholar
  44. L. R. Welch, R. J. McEliece, and H. Rumsey Jr. A low-rate improvement on the Elias bound. IEEE Transactions on Information Theory, 23, 1974. Google Scholar
  45. Mary Wootters. On the list decodability of random linear codes with large error rates. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, 2013. Google Scholar
  46. John M Wozencraft. List decoding. Quarterly Progress Report, 48:90-95, 1958. Google Scholar
  47. Victor Vasilievich Zyablov and Mark Semenovich Pinsker. List concatenated decoding. Problemy Peredachi Informatsii, 17(4):29-33, 1981. Google Scholar