Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, Prashant Nalini Vasudevan
Creative Commons Attribution 4.0 International license
Prange’s information set algorithm is a well-known decoding algorithm for linear codes. It decodes corrupted codewords of most 𝔽₂-linear codes C of message length n up to relative error rate O(log n / n) in poly(n) time. We show that the error rate can be improved to O((log n)² / n), provided: (1) the decoder has access to a polynomial-length advice string that depends only on C, and (2) C is n^{-Ω(1)}-balanced.
As a consequence, we improve the error tolerance in decoding random linear codes when inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of Hw, where the rows of H are a random sample of short dual codewords, measures the proximity of a received word w to the code in the regime of interest. Given such H as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding Hw for an arbitrary polynomial-size advice matrix H.
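The proximity measure and its local minimization can be illustrated on a toy instance. The sketch below is not the paper's construction: instead of a random sample of short dual codewords of a balanced code, it uses the full systematic parity-check matrix of a small random 𝔽₂-linear code, and all parameters are hypothetical choices for illustration. It shows that the measure wt(Hw) vanishes on codewords (every row of H is a dual codeword), and that greedily flipping the bit that most reduces wt(Hw) can drive a singly-corrupted word back to a codeword.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy code parameters (illustrative only; the paper's results are asymptotic).
k, n = 10, 30
A = rng.integers(0, 2, size=(k, n - k))
G = np.hstack([np.eye(k, dtype=int), A])        # systematic generator matrix
H = np.hstack([A.T, np.eye(n - k, dtype=int)])  # parity-check matrix: rows are dual codewords

def syndrome_weight(H, w):
    """Hamming weight of Hw over F_2 -- the proximity measure."""
    return int((H @ w % 2).sum())

# Encode a random message and flip one coordinate of the codeword.
m = rng.integers(0, 2, size=k)
c = m @ G % 2
w = c.copy()
w[rng.integers(n)] ^= 1

# Greedy local minimization of wt(Hw): flip whichever bit reduces it most.
while (s := syndrome_weight(H, w)) > 0:
    gains = [syndrome_weight(H, (w + np.eye(n, dtype=int)[i]) % 2) for i in range(n)]
    i = int(np.argmin(gains))
    if gains[i] >= s:
        break  # local minimum reached; give up
    w[i] ^= 1

print("decoded to a codeword:", syndrome_weight(H, w) == 0)
print("recovered original codeword:", np.array_equal(w, c))
```

With a single flipped bit the error column of H is itself the syndrome, so one greedy flip already zeroes the measure; the paper's contribution is choosing the rows of H (short dual codewords of a balanced code) so that this local strategy survives a much higher error rate.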
@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2026.23,
author = {Bogdanov, Andrej and Chatterjee, Rohit and Li, Yunqi and Vasudevan, Prashant Nalini},
title = {{Decoding Balanced Linear Codes with Preprocessing}},
booktitle = {17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
pages = {23:1--23:23},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-410-9},
ISSN = {1868-8969},
year = {2026},
volume = {362},
editor = {Saraf, Shubhangi},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.23},
URN = {urn:nbn:de:0030-drops-253107},
doi = {10.4230/LIPIcs.ITCS.2026.23},
annote = {Keywords: Linear codes, nearest codeword problem, learning parity with noise}
}