Search Results

Documents authored by Li, Yunqi


Document
Decoding Balanced Linear Codes with Preprocessing

Authors: Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Prange’s information set algorithm is a well-known decoding algorithm for linear codes. It decodes corrupted codewords of most 𝔽₂-linear codes C of message length n up to relative error rate O(log n / n) in poly(n) time. We show that the error rate can be improved to O((log n)² / n), provided: (1) the decoder has access to a polynomial-length advice string that depends on C only, and (2) C is n^{-Ω(1)}-balanced. As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate. Our main technical result is that the Hamming weight of Hw, where the rows of H are a random sample of short dual codewords, measures the proximity of a received word w to the code in the regime of interest. Given such H as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding Hw for an arbitrary polynomial-size advice matrix H.
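The abstract's decoding rule — compute the Hamming weight of Hw, where the rows of H are short dual codewords, and flip bits of w to locally reduce that weight — can be sketched in a few lines. This is an illustrative toy only, not the paper's algorithm: the tiny code, the choice of H, and the greedy bit-flip schedule below are all assumptions made up for a minimal example.

```python
# Toy sketch of weight-minimization decoding with an advice matrix H.
# H: list of rows (0/1 lists), each a dual codeword of the code; w: received 0/1 word.

def syndrome_weight(H, w):
    """Hamming weight of H·w over GF(2): the number of dual checks w violates."""
    return sum(sum(hi * wi for hi, wi in zip(row, w)) % 2 for row in H)

def local_minimize(H, w):
    """Greedily flip single bits of w while a flip strictly lowers wt(H·w)."""
    w = list(w)
    improved = True
    while improved:
        improved = False
        base = syndrome_weight(H, w)
        for i in range(len(w)):
            w[i] ^= 1                       # try flipping bit i
            if syndrome_weight(H, w) < base:
                base = syndrome_weight(H, w)  # keep the flip
                improved = True
            else:
                w[i] ^= 1                   # revert the flip
    return w
```

For example, with H the adjacent-parity checks of the length-4 repetition code, a single-bit corruption of a codeword is repaired by one greedy flip. The actual result concerns random samples of short dual codewords for n^{-Ω(1)}-balanced codes, where this weight provably tracks distance to the code in the relevant error regime.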

Cite as

Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan. Decoding Balanced Linear Codes with Preprocessing. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


BibTeX

@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2026.23,
  author =	{Bogdanov, Andrej and Chatterjee, Rohit and Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Decoding Balanced Linear Codes with Preprocessing}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.23},
  URN =		{urn:nbn:de:0030-drops-253107},
  doi =		{10.4230/LIPIcs.ITCS.2026.23},
  annote =	{Keywords: Linear codes, nearest codeword problem, learning parity with noise}
}
Document
Hardness Amplification for Real-Valued Functions

Authors: Yunqi Li and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
Given an integer-valued function f:{0,1}ⁿ → {0,1,… , m-1} that is mildly hard to compute on instances drawn from some distribution D over {0,1}ⁿ, we show that the function g(x_1, … , x_t) = f(x_1) + ⋯ + f(x_t) is strongly hard to compute on instances (x_1,… ,x_t) drawn from the product distribution D^t. We also show the same for the task of approximately computing real-valued functions f:{0,1}ⁿ → [0,m). Our theorems immediately imply hardness self-amplification for several natural problems, including Max-Clique, Max-SAT, Approximate #SAT, and Entropy Estimation.
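The direct-sum construction g(x_1, … , x_t) = f(x_1) + ⋯ + f(x_t) is easy to state in code, and a small simulation conveys the basic intuition (not the theorem or its proof, which concerns converting a good solver for g back into one for f): a solver that errs independently on each summand is correct on the sum roughly only when it is correct on every summand. The solver, error model, and all names below are illustrative assumptions.

```python
import random

def direct_sum(f, xs):
    """The amplified function g(x_1, ..., x_t) = f(x_1) + ... + f(x_t)."""
    return sum(f(x) for x in xs)

def noisy_solver(f, x, p, rng):
    """A toy solver: correct with probability p, otherwise off by one."""
    return f(x) if rng.random() < p else f(x) + 1

def success_rate(f, t, p, trials=10_000, seed=0):
    """Estimate how often the noisy solver computes g correctly on t inputs."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        xs = [rng.randrange(16) for _ in range(t)]
        guess = sum(noisy_solver(f, x, p, rng) for x in xs)
        ok += guess == direct_sum(f, xs)
    return ok / trials
```

Since every error here shifts the answer upward, errors never cancel, and the empirical success rate on g decays like p^t — e.g. for p = 0.9 and t = 5 it lands near 0.9⁵ ≈ 0.59, matching the mild-to-strong hardness picture the abstract describes.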

Cite as

Yunqi Li and Prashant Nalini Vasudevan. Hardness Amplification for Real-Valued Functions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 2:1-2:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{li_et_al:LIPIcs.CCC.2025.2,
  author =	{Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Hardness Amplification for Real-Valued Functions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{2:1--2:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.2},
  URN =		{urn:nbn:de:0030-drops-236967},
  doi =		{10.4230/LIPIcs.CCC.2025.2},
  annote =	{Keywords: Average-case complexity, hardness amplification}
}