26 Search Results for "Kopparty, Swastik"


Document
A Technique for Hardness Amplification Against AC⁰

Authors: William M. Hoza

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for AC⁰ circuits. First, we investigate the extent to which AC⁰ circuits of depth d can approximate AC⁰ circuits of some larger depth d + k. The case k = 1 is resolved by Håstad, Rossman, Servedio, and Tan’s celebrated average-case depth hierarchy theorem (JACM 2017). Our contribution is a significantly stronger correlation bound when k ≥ 3. Specifically, we show that there exists a linear-size AC⁰_{d + k} circuit h : {0, 1}ⁿ → {0, 1} such that for every AC⁰_d circuit g, either g has size exp(n^{Ω(1/d)}), or else g agrees with h on at most a (1/2 + ε)-fraction of inputs where ε = exp(-(1/d) ⋅ Ω(log n)^{k-1}). For comparison, Håstad, Rossman, Servedio, and Tan’s result has ε = n^{-Θ(1/d)}. Second, we consider the majority function. It is well known that the majority function is moderately hard for AC⁰ circuits (and stronger classes). Our contribution is a stronger correlation bound for the XOR of t copies of the n-bit majority function, denoted MAJ_n^{⊕ t}. We show that if g is an AC⁰_d circuit of size S, then g agrees with MAJ_n^{⊕ t} on at most a (1/2 + ε)-fraction of inputs, where ε = (O(log S)^{d - 1} / √n)^t. To prove these results, we develop a hardness amplification technique that is tailored to a specific type of circuit lower bound proof. In particular, one way to show that a function h is moderately hard for AC⁰ circuits is to (a) design some distribution over random restrictions or random projections, (b) show that AC⁰ circuits simplify to shallow decision trees under these restrictions/projections, and finally (c) show that after applying the restriction/projection, h is moderately hard for shallow decision trees with respect to an appropriate distribution. We show that (roughly speaking) if h can be proven to be moderately hard by a proof with that structure, then XORing multiple copies of h amplifies its hardness. Our analysis involves a new kind of XOR lemma for decision trees, which might be of independent interest.
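
The hardness amplification above rests on the basic fact that correlation ("advantage over 1/2") multiplies when independent copies are XORed, provided the predictor itself splits into independent per-copy predictors; the hard part, which the paper's decision-tree XOR lemma addresses, is that a circuit attacking the XOR need not have this product form. The following small Python sketch (illustrative only, not code from the paper) checks the elementary product identity by brute force on random truth tables.

import itertools, random

# Sanity check: E[(-1)^{(g1 xor ... xor gt) + (h1 xor ... xor ht)}] = prod_i E[(-1)^{gi + hi}]
# over independent uniform blocks. Equivalently, if each g_i agrees with h_i on a
# (1/2 + eps_i)-fraction of inputs, the XORs agree on a (1/2 + 2^{t-1} prod eps_i)-fraction.

def correlation(f, g, m):
    # E_x[(-1)^{f(x) xor g(x)}] over the uniform distribution on {0,1}^m
    total = sum(1 if f(x) == g(x) else -1 for x in itertools.product([0, 1], repeat=m))
    return total / 2 ** m

def xor_of(funcs, n):
    # (x_1,...,x_t) -> f_1(x_1) xor ... xor f_t(x_t), on t disjoint blocks of n bits each
    return lambda x: sum(funcs[i](x[i * n:(i + 1) * n]) for i in range(len(funcs))) % 2

def random_function(n):
    table = {x: random.randint(0, 1) for x in itertools.product([0, 1], repeat=n)}
    return lambda x: table[x]

random.seed(0)
n, t = 3, 3
hs = [random_function(n) for _ in range(t)]
gs = [random_function(n) for _ in range(t)]

lhs = correlation(xor_of(gs, n), xor_of(hs, n), n * t)
rhs = 1.0
for g, h in zip(gs, hs):
    rhs *= correlation(g, h, n)
print(lhs, rhs)  # the two values coincide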

Cite as

William M. Hoza. A Technique for Hardness Amplification Against AC⁰. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2024.1,
  author =	{Hoza, William M.},
  title =	{{A Technique for Hardness Amplification Against AC⁰}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.1},
  URN =		{urn:nbn:de:0030-drops-203977},
  doi =		{10.4230/LIPIcs.CCC.2024.1},
  annote =	{Keywords: Bounded-depth circuits, average-case lower bounds, hardness amplification, XOR lemmas}
}
Document
Explicit Time and Space Efficient Encoders Exist Only with Random Access

Authors: Joshua Cook and Dana Moshkovitz

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We give the first explicit constant rate, constant relative distance, linear codes with an encoder that runs in time n^{1 + o(1)} and space polylog(n) provided random access to the message. Prior to this work, the only such codes were non-explicit, for instance repeat-accumulate codes [Divsalar et al., 1998] and the codes described in [Gál et al., 2013]. To construct our codes, we also give explicit, efficiently invertible, lossless condensers with constant entropy gap and polylogarithmic seed length. In contrast to encoders with random access to the message, we show that encoders with sequential access to the message cannot run in almost linear time and polylogarithmic space. Our notion of sequential access is much stronger than streaming access.

Cite as

Joshua Cook and Dana Moshkovitz. Explicit Time and Space Efficient Encoders Exist Only with Random Access. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 5:1-5:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cook_et_al:LIPIcs.CCC.2024.5,
  author =	{Cook, Joshua and Moshkovitz, Dana},
  title =	{{Explicit Time and Space Efficient Encoders Exist Only with Random Access}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{5:1--5:54},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.5},
  URN =		{urn:nbn:de:0030-drops-204015},
  doi =		{10.4230/LIPIcs.CCC.2024.5},
  annote =	{Keywords: Time-Space Trade Offs, Error Correcting Codes, Encoders, Explicit Constructions, Streaming Lower Bounds, Sequential Access, Time-Space Lower Bounds, Lossless Condensers, Invertible Condensers, Condensers}
}
Document
Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries

Authors: Gil Cohen and Tal Yankovitz

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Recently, Kumar and Mon reached a significant milestone by constructing asymptotically good relaxed locally correctable codes (RLCCs) with poly-logarithmic query complexity. Specifically, they constructed n-bit RLCCs with O(log^{69} n) queries. Their construction relies on a clever reduction to locally testable codes (LTCs), capitalizing on recent breakthrough works in LTCs. As for lower bounds, Gur and Lachish (SICOMP 2021) proved that any asymptotically-good RLCC must make Ω̃(√{log n}) queries. Hence emerges the intriguing question regarding the identity of the least value 1/2 ≤ e ≤ 69 for which asymptotically-good RLCCs with query complexity (log n)^{e+o(1)} exist. In this work, we make substantial progress in narrowing the gap by devising asymptotically-good RLCCs with a query complexity of (log n)^{2+o(1)}. The key insight driving our work lies in recognizing that the strong guarantee of local testability overshoots the requirements for the Kumar-Mon reduction. In particular, we prove that we can replace the LTCs by "vanilla" expander codes which indeed have the necessary property: local testability in the code’s vicinity.

Cite as

Gil Cohen and Tal Yankovitz. Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.CCC.2024.8,
  author =	{Cohen, Gil and Yankovitz, Tal},
  title =	{{Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.8},
  URN =		{urn:nbn:de:0030-drops-204045},
  doi =		{10.4230/LIPIcs.CCC.2024.8},
  annote =	{Keywords: Relaxed locally decodable codes, Relaxed locally correctable codes, RLCC, RLDC}
}
Document
Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs

Authors: Xin Li and Yan Zhong

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Affine extractors give some of the best-known lower bounds for various computational models, such as AC⁰ circuits, parity decision trees, and general Boolean circuits. However, they are not known to give strong lower bounds for read-once branching programs (ROBPs). In a recent work, Gryaznov, Pudlák, and Talebanfard (CCC' 22) introduced a stronger version of affine extractors known as directional affine extractors, together with a generalization of ROBPs where each node can make linear queries, and showed that the former implies strong lower bounds for a certain type of the latter known as strongly read-once linear branching programs (SROLBPs). Their main result gives explicit constructions of directional affine extractors for entropy k > 2n/3, which implies average-case complexity 2^{n/3-o(n)} against SROLBPs with exponentially small correlation. A follow-up work by Chattopadhyay and Liao (CCC' 23) improves the hardness to 2^{n-o(n)} at the price of increasing the correlation to polynomially large, via a new connection to sumset extractors introduced by Chattopadhyay and Li (STOC' 16) and explicit constructions of such extractors by Chattopadhyay and Liao (STOC' 22). Both works left open the questions of better constructions of directional affine extractors and improved average-case complexity against SROLBPs in the regime of small correlation. This paper provides a much more in-depth study of directional affine extractors, SROLBPs, and ROBPs. Our main results include: - An explicit construction of directional affine extractors with k = o(n) and exponentially small error, which gives average-case complexity 2^{n-o(n)} against SROLBPs with exponentially small correlation, thus answering the two open questions raised in previous works. - An explicit function in AC⁰ that gives average-case complexity 2^{(1-δ)n} against ROBPs with negligible correlation, for any constant δ > 0. Previously, no such average-case hardness was known, and the best size lower bound for any function in AC⁰ against ROBPs is 2^Ω(n). One of the key ingredients in our constructions is a new linear somewhere condenser for affine sources, which is based on dimension expanders. The condenser also leads to an unconditional improvement of the entropy requirement of explicit affine extractors with negligible error. We further show that the condenser also works for general weak random sources, under the Polynomial Freiman-Ruzsa Theorem in 𝖥₂ⁿ, recently proved by Gowers, Green, Manners, and Tao (arXiv' 23).

Cite as

Xin Li and Yan Zhong. Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.CCC.2024.10,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.10},
  URN =		{urn:nbn:de:0030-drops-204060},
  doi =		{10.4230/LIPIcs.CCC.2024.10},
  annote =	{Keywords: Randomness Extractors, Affine, Read-once Linear Branching Programs, Low-degree polynomials, AC⁰ circuits}
}
Document
Pseudorandomness, Symmetry, Smoothing: I

Authors: Harm Derksen, Peter Ivanov, Chin Ho Lee, and Emanuele Viola

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We prove several new results about bounded-uniform and small-bias distributions. A main message is that small-bias, even perturbed with noise, does not fool several classes of tests better than bounded uniformity. We prove this for threshold tests, small-space algorithms, and small-depth circuits. In particular, we obtain small-bias distributions that - achieve an optimal lower bound on their statistical distance to any bounded-uniform distribution. This closes a line of research initiated by Alon, Goldreich, and Mansour in 2003, and improves on a result by O'Donnell and Zhao. - have heavier tail mass than the uniform distribution. This answers a question posed by several researchers including Bun and Steinke. - rule out a popular paradigm for constructing pseudorandom generators, originating in a 1989 work by Ajtai and Wigderson. This again answers a question raised by several researchers. For branching programs, our result matches a bound by Forbes and Kelley. Our small-bias distributions above are symmetric. We show that the xor of any two symmetric small-bias distributions fools any bounded function. Hence our examples cannot be extended to the xor of two small-bias distributions, another popular paradigm whose power remains unknown. We also generalize and simplify the proof of a result of Bazzi.
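
For readers comparing the two notions in the abstract, a minimal illustration (not one of the paper's constructions): a distribution on {0,1}ⁿ is k-wise uniform if every k coordinates are jointly uniform, and ε-biased if every nonempty parity has advantage at most ε. The uniform distribution over even-parity strings is (n-1)-wise uniform yet maximally biased on the full parity, the standard example showing the two measures can diverge. The Python sketch below checks both properties by brute force.

import itertools

def bias(support, n):
    # max over nonempty S of |E_{x ~ uniform(support)}[(-1)^{<S, x>}]|
    worst = 0.0
    for S in itertools.product([0, 1], repeat=n):
        if not any(S):
            continue
        total = sum((-1) ** sum(s * x for s, x in zip(S, xs)) for xs in support)
        worst = max(worst, abs(total) / len(support))
    return worst

def is_kwise_uniform(support, n, k):
    # every k coordinates should take each of the 2^k patterns equally often
    for coords in itertools.combinations(range(n), k):
        counts = {}
        for xs in support:
            key = tuple(xs[i] for i in coords)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != 2 ** k or len(set(counts.values())) != 1:
            return False
    return True

n = 5
even = [xs for xs in itertools.product([0, 1], repeat=n) if sum(xs) % 2 == 0]
print(is_kwise_uniform(even, n, n - 1))  # True: (n-1)-wise uniform
print(bias(even, n))                     # 1.0: the full parity is constant on the support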

Cite as

Harm Derksen, Peter Ivanov, Chin Ho Lee, and Emanuele Viola. Pseudorandomness, Symmetry, Smoothing: I. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 18:1-18:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{derksen_et_al:LIPIcs.CCC.2024.18,
  author =	{Derksen, Harm and Ivanov, Peter and Lee, Chin Ho and Viola, Emanuele},
  title =	{{Pseudorandomness, Symmetry, Smoothing: I}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{18:1--18:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.18},
  URN =		{urn:nbn:de:0030-drops-204144},
  doi =		{10.4230/LIPIcs.CCC.2024.18},
  annote =	{Keywords: pseudorandomness, k-wise uniform distributions, small-bias distributions, noise, symmetric tests, thresholds, Krawtchouk polynomials}
}
Document
On the Degree of Polynomials Computing Square Roots Mod p

Authors: Kiran S. Kedlaya and Swastik Kopparty

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
For an odd prime p, we say f(X) ∈ F_p[X] computes square roots in F_p if, for all nonzero perfect squares a ∈ F_p, we have f(a)² = a. When p ≡ 3 mod 4, it is well known that f(X) = X^{(p+1)/4} computes square roots. This degree is surprisingly low (and in fact lowest possible), since we have specified (p-1)/2 evaluations (up to sign) of the polynomial f(X). On the other hand, for p ≡ 1 mod 4 there was previously no nontrivial bound known on the lowest degree of a polynomial computing square roots in F_p. We show that for all p ≡ 1 mod 4, any polynomial computing square roots in F_p has degree at least p/3. Our main new ingredient is a general lemma which may be of independent interest: powers of a low degree polynomial cannot have too many consecutive zero coefficients. The proof method also yields a robust version: any polynomial that computes square roots for 99% of the squares also has degree almost p/3. In the other direction, Agou, Deléglise, and Nicolas [Agou et al., 2003] showed that for infinitely many p ≡ 1 mod 4, the degree of a polynomial computing square roots can be as small as 3p/8.
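
A quick numerical companion (illustration only, not code from the paper): part (1) below verifies the classical p ≡ 3 mod 4 formula quoted above, and part (2) brute-forces, for the tiny prime p = 13 ≡ 1 mod 4, the least degree achievable over all sign choices of the square roots, so that it can be compared against the p/3 lower bound.

import itertools

def polymul_linear(poly, root, p):
    # multiply poly (coefficients, lowest degree first) by (X - root) mod p
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - root * c) % p
        out[i + 1] = (out[i + 1] + c) % p
    return out

def interpolate(points, p):
    # Lagrange interpolation over F_p; returns coefficients, lowest degree first
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = polymul_linear(basis, xj, p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def degree(coeffs):
    return max((i for i, c in enumerate(coeffs) if c != 0), default=-1)

# (1) for p = 3 mod 4, X^{(p+1)/4} squares back to a on every nonzero quadratic residue a
for p in [7, 11, 19, 23]:
    residues = [a for a in range(1, p) if pow(a, (p - 1) // 2, p) == 1]
    assert all(pow(pow(a, (p + 1) // 4, p), 2, p) == a for a in residues)

# (2) brute force over all sign choices for p = 13
p = 13
squares = sorted({a * a % p for a in range(1, p)})
roots = {s: [r for r in range(1, p) if r * r % p == s] for s in squares}
best = p
for signs in itertools.product(range(2), repeat=len(squares)):
    pts = [(s, roots[s][b]) for s, b in zip(squares, signs)]
    best = min(best, degree(interpolate(pts, p)))
print("least degree found for p = 13:", best, "   p/3 =", p / 3)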

Cite as

Kiran S. Kedlaya and Swastik Kopparty. On the Degree of Polynomials Computing Square Roots Mod p. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kedlaya_et_al:LIPIcs.CCC.2024.25,
  author =	{Kedlaya, Kiran S. and Kopparty, Swastik},
  title =	{{On the Degree of Polynomials Computing Square Roots Mod p}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.25},
  URN =		{urn:nbn:de:0030-drops-204219},
  doi =		{10.4230/LIPIcs.CCC.2024.25},
  annote =	{Keywords: Algebraic Computation, Polynomials, Computing Square roots, Reed-Solomon Codes}
}
Document
Low-Depth Algebraic Circuit Lower Bounds over Any Field

Authors: Michael A. Forbes

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
The recent breakthrough of Limaye, Srinivasan and Tavenas [Limaye et al., 2022] (LST) gave the first super-polynomial lower bounds against low-depth algebraic circuits, for any field of zero (or sufficiently large) characteristic. It was an open question to extend this result to small characteristic ([Limaye et al., 2022; Govindasamy et al., 2022; Fournier et al., 2023]), which in particular is relevant for an approach to prove superpolynomial AC⁰[p]-Frege lower bounds ([Govindasamy et al., 2022]). In this work, we prove super-polynomial algebraic circuit lower bounds against low-depth algebraic circuits over any field, with the same parameters as LST (or even matching the improved parameters of Bhargav, Dutta, and Saxena [Bhargav et al., 2022]). We give two proofs. The first is logical, showing that even though the proof of LST naively fails in small characteristic, the proof is sufficiently algebraic that generic transfer results imply that the result over characteristic zero implies the result over all fields. Motivated by this indirect proof, we then proceed to give a second constructive proof, replacing the field-dependent set-multilinearization result of LST with a set-multilinearization that works over any field, by using the Binet-Minc identity [Minc, 1979].

Cite as

Michael A. Forbes. Low-Depth Algebraic Circuit Lower Bounds over Any Field. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 31:1-31:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{forbes:LIPIcs.CCC.2024.31,
  author =	{Forbes, Michael A.},
  title =	{{Low-Depth Algebraic Circuit Lower Bounds over Any Field}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{31:1--31:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.31},
  URN =		{urn:nbn:de:0030-drops-204271},
  doi =		{10.4230/LIPIcs.CCC.2024.31},
  annote =	{Keywords: algebraic circuits, lower bounds, low-depth circuits, positive characteristic}
}
Document
Track A: Algorithms, Complexity and Games
Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error

Authors: Guy Goldberg

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Relaxed locally decodable codes (RLDCs) are error-correcting codes in which individual bits of the message can be recovered by querying only a few bits from a noisy codeword. For uncorrupted codewords, and for every bit, the decoder must decode the bit correctly with high probability. However, for a noisy codeword, a relaxed local decoder is allowed to output a "rejection" symbol, indicating that the decoding failed. We study the power of adaptivity and two-sided error for RLDCs. Our main result is that if the underlying code is linear, adaptivity and two-sided error do not give any power to relaxed local decoding. We construct a reduction from adaptive, two-sided error relaxed local decoders to non-adaptive, one-sided error ones. That is, the reduction produces a relaxed local decoder that never errs or rejects if its input is a valid codeword and makes queries based on its internal randomness (and the requested index to decode), independently of the input. The reduction essentially maintains the query complexity, requiring at most one additional query. For any input, the decoder’s error probability increases at most two-fold. Furthermore, assuming the underlying code is in systematic form, where the original message is embedded as the first bits of its encoding, the reduction also conserves both the code itself and its rate and distance properties. We base the reduction on our new notion of additive promise problems. A promise problem is additive if the sum of any two YES-instances is a YES-instance and the sum of any NO-instance and a YES-instance is a NO-instance. This novel framework captures both linear RLDCs and property testing (of linear properties), despite their significant differences. We prove that in general, algorithms for any additive promise problem do not gain power from adaptivity or two-sided error, and obtain the result for RLDCs as a special case. The result also holds for relaxed locally correctable codes (RLCCs), where a codeword bit should be recovered. As an application, we improve the best known lower bound for linear adaptive RLDCs. Specifically, we prove that such codes require block length of n ≥ k^{1+Ω(1/q²)}, where k denotes the message length and q denotes the number of queries.
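
The additivity structure is easy to see concretely for linear codes: codewords (YES-instances) are closed under addition, and translating any word by a codeword leaves its distance to the code unchanged, so far-from-the-code words (NO-instances) stay NO. The Python sketch below checks both facts exhaustively for an arbitrary toy [6,3] binary code; the code and parameters are illustrative choices, not taken from the paper.

import itertools

G = [  # generator matrix of a small [6,3] binary linear code (chosen arbitrarily)
    (1, 0, 0, 1, 1, 0),
    (0, 1, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1),
]

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

codewords = set()
for msg in itertools.product([0, 1], repeat=len(G)):
    c = (0,) * len(G[0])
    for bit, row in zip(msg, G):
        if bit:
            c = add(c, row)
    codewords.add(c)

def dist_to_code(w):
    return min(sum(a != b for a, b in zip(w, c)) for c in codewords)

# YES + YES = YES: the sum of two codewords is a codeword
assert all(add(c1, c2) in codewords for c1 in codewords for c2 in codewords)

# YES + NO = NO: adding a codeword never changes a word's distance from the code
for w in itertools.product([0, 1], repeat=6):
    d = dist_to_code(w)
    assert all(dist_to_code(add(w, c)) == d for c in codewords)
print("additivity checks passed")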

Cite as

Guy Goldberg. Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 74:1-74:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{goldberg:LIPIcs.ICALP.2024.74,
  author =	{Goldberg, Guy},
  title =	{{Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{74:1--74:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.74},
  URN =		{urn:nbn:de:0030-drops-202174},
  doi =		{10.4230/LIPIcs.ICALP.2024.74},
  annote =	{Keywords: Locally decodable codes, Relaxed locally correctable codes, Relaxed locally decodable codes}
}
Document
Track A: Algorithms, Complexity and Games
Two-Source and Affine Non-Malleable Extractors for Small Entropy

Authors: Xin Li and Yan Zhong

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Non-malleable extractors are generalizations and strengthenings of standard randomness extractors that are resilient to adversarial tampering. Such extractors have wide applications in cryptography and have become important cornerstones in recent breakthroughs of explicit constructions of two-source extractors and affine extractors for small entropy. However, explicit constructions of non-malleable extractors appear to be much harder than standard extractors. Indeed, in the well-studied models of two-source and affine non-malleable extractors, the previous best constructions only work for entropy rate > 2/3 and 1-γ for some small constant γ > 0 respectively, by Li (FOCS' 23). In this paper, we present explicit constructions of two-source and affine non-malleable extractors that match the state-of-the-art constructions of standard ones for small entropy. Our main results include: - Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k ≥ log^C n and polynomially small error, matching the parameters of standard extractors by Chattopadhyay and Zuckerman (STOC' 16, Annals of Mathematics' 19) and Li (FOCS' 16). - Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k = O(log n) and constant error, matching the parameters of standard extractors by Li (FOCS' 23). Our constructions significantly improve previous results, and the parameters (entropy requirement and error) are the best possible without first improving the constructions of standard extractors. In addition, our improved affine non-malleable extractors give strong lower bounds for a certain kind of read-once linear branching programs, recently introduced by Gryaznov, Pudlák, and Talebanfard (CCC' 22) as a generalization of several well-studied computational models. These bounds match the previously best-known average-case hardness results given by Chattopadhyay and Liao (CCC' 23) and Li (FOCS' 23), where the branching program size lower bounds are close to optimal, but the explicit functions we use here are different. Our results also suggest a possible deeper connection between non-malleable extractors and standard ones.

Cite as

Xin Li and Yan Zhong. Two-Source and Affine Non-Malleable Extractors for Small Entropy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 108:1-108:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2024.108,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Two-Source and Affine Non-Malleable Extractors for Small Entropy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{108:1--108:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.108},
  URN =		{urn:nbn:de:0030-drops-202512},
  doi =		{10.4230/LIPIcs.ICALP.2024.108},
  annote =	{Keywords: Randomness Extractors, Non-malleable, Two-source, Affine}
}
Document
RANDOM
Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

Authors: Huck Bennett and Chris Peikert

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any p ≥ 1 and any constant γ < 2^{1/p}, the γ-approximate problem in the 𝓁_p norm (γ-GapSVP_p) is not in RP unless NP ⊆ RP. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of γ-GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known NP-hardness results for GapSVP_p with p < ∞, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Transactions on Information Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding n-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an O(√log n) factor of Minkowski’s bound. This asymptotically matches the best known distance for decoding near Minkowski’s bound, due to Mook and Peikert (IEEE Transactions on Information Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
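
For concreteness, "Construction A" turns a linear code C ⊆ F_q^n into the lattice L = {v ∈ Z^n : v mod q ∈ C}, which has determinant q^{n-k} when C has dimension k. The Python sketch below builds this lattice from a tiny Reed-Solomon code and brute-forces its shortest nonzero vector; the parameters are toy choices for illustration and are unrelated to those in the paper's reduction or decoder.

import itertools

q, k = 5, 2                      # prime field size and code dimension
points = list(range(q))          # evaluate at all of F_q, so the code length is n = q
n = len(points)

# all codewords: evaluations of polynomials of degree < k at the chosen points
codewords = set()
for coeffs in itertools.product(range(q), repeat=k):
    codewords.add(tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q for x in points))

def in_lattice(v):
    # Construction A membership: v is in L iff its residue mod q is a codeword
    return tuple(x % q for x in v) in codewords

# q*e_1 is always in L, so the shortest nonzero vector has entries in [-q, q];
# for this tiny instance a plain brute force suffices.
shortest = q * q
for v in itertools.product(range(-q, q + 1), repeat=n):
    if any(v) and in_lattice(v):
        shortest = min(shortest, sum(x * x for x in v))
print("squared length of the shortest nonzero vector:", shortest)
print("lattice determinant q^(n-k) =", q ** (n - k))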

Cite as

Huck Bennett and Chris Peikert. Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.APPROX/RANDOM.2023.37,
  author =	{Bennett, Huck and Peikert, Chris},
  title =	{{Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.37},
  URN =		{urn:nbn:de:0030-drops-188622},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.37},
  annote =	{Keywords: Lattices, Shortest Vector Problem, Reed-Solomon codes, NP-hardness, derandomization}
}
Document
RANDOM
Extracting Mergers and Projections of Partitions

Authors: Swastik Kopparty and Vishvajeet N

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We study the problem of extracting randomness from somewhere-random sources, and related combinatorial phenomena: partition analogues of Shearer’s lemma on projections. A somewhere-random source is a tuple (X_1, …, X_t) of (possibly correlated) {0,1}ⁿ-valued random variables X_i where for some unknown i ∈ [t], X_i is guaranteed to be uniformly distributed. An extracting merger is a seeded device that takes a somewhere-random source as input and outputs nearly uniform random bits. We study the seed-length needed for extracting mergers with constant t and constant error. Since a somewhere-random source has min-entropy at least n, a standard extractor can also serve as an extracting merger. Our goal is to understand whether the further structure of being somewhere-random rather than just having high entropy enables smaller seed-length, and towards this we show: - Just like in the case of standard extractors, seedless extracting mergers with even just one output bit do not exist. - Unlike the case of standard extractors, it is possible to have extracting mergers that output a constant number of bits using only constant seed. Furthermore, a random choice of merger does not work for this purpose! - Nevertheless, just like in the case of standard extractors, an extracting merger which gets most of the entropy out (namely, having Ω(n) output bits) must have Ω(log n) seed. This is the main technical result of our work, and is proved by a second-moment strengthening of the graph-theoretic approach of Radhakrishnan and Ta-Shma to extractors. All this is in contrast to the status for condensing mergers (where the output is only required to have high min-entropy), whose seed-length/output-length tradeoffs can all be fully explained by using standard condensers. Inspired by such considerations, we also formulate a new and basic class of problems in combinatorics: partition analogues of Shearer’s lemma. We show basic results in this direction; in particular, we prove that in any partition of the 3-dimensional cube [0,1]³ into two parts, one of the parts has an axis-parallel 2-dimensional projection of area at least 3/4.
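
The final statement has a small discrete analogue that can be checked exhaustively (a toy check, not part of the paper's proof): over all ways of splitting the 2×2×2 grid of cells into two parts, some part always has an axis-parallel projection covering at least 3 of the 4 cells of a face, mirroring the 3/4 threshold. The Python sketch below performs this brute force.

import itertools

cells = list(itertools.product(range(2), repeat=3))

def max_projection(part):
    # largest axis-parallel projection of a set of cells, over the three coordinate planes
    if not part:
        return 0
    return max(len({(c[i], c[j]) for c in part}) for i, j in [(0, 1), (0, 2), (1, 2)])

worst = 4
for bits in itertools.product([0, 1], repeat=len(cells)):
    part_a = [c for c, b in zip(cells, bits) if b]
    part_b = [c for c, b in zip(cells, bits) if not b]
    worst = min(worst, max(max_projection(part_a), max_projection(part_b)))
print("every 2-part partition has a projection covering at least", worst, "of the 4 face cells")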

Cite as

Swastik Kopparty and Vishvajeet N. Extracting Mergers and Projections of Partitions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 52:1-52:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kopparty_et_al:LIPIcs.APPROX/RANDOM.2023.52,
  author =	{Kopparty, Swastik and N, Vishvajeet},
  title =	{{Extracting Mergers and Projections of Partitions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{52:1--52:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.52},
  URN =		{urn:nbn:de:0030-drops-188777},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.52},
  annote =	{Keywords: randomness extractors, randomness mergers, extracting mergers, partitions, projections of partitions, covers, projections of covers}
}
Document
RANDOM
Unbalanced Expanders from Multiplicity Codes

Authors: Itay Kalev and Amnon Ta-Shma

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
In 2007 Guruswami, Umans and Vadhan gave an explicit construction of a lossless condenser based on Parvaresh-Vardy codes. This lossless condenser is a basic building block in many constructions, and, in particular, is behind the state-of-the-art extractor constructions. We give an alternative construction that is based on Multiplicity codes. While the bottom-line result is similar to the GUV result, the analysis is very different. In GUV (and Parvaresh-Vardy codes) the polynomial ring is closed to a finite field, and every polynomial is associated with related elements in the finite field. In our construction a polynomial from the polynomial ring is associated with its iterated derivatives. Our analysis boils down to solving a differential equation over a finite field, and uses previous techniques, introduced by Kopparty (in [Swastik Kopparty, 2015]) for the list-decoding setting. We also observe that these (and more general) questions were studied in differential algebra, and we use the terminology and results developed there. We believe these techniques have the potential of getting better constructions and solving the current bottlenecks in the area.
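
As background for the construction, a multiplicity code (here with multiplicity 2 over a prime field, toy parameters not taken from the paper) encodes a polynomial f of degree less than k by the pair (f(a), f'(a)) at every a ∈ F_p, where f' is the formal derivative (the order-1 Hasse derivative). The Python sketch below implements this encoder and checks the distance property that two distinct low-degree polynomials can agree with multiplicity 2 in only a few positions.

import random

p, k = 31, 10
random.seed(1)

def evaluate(coeffs, x, p):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def derivative(coeffs, p):
    # formal derivative of the polynomial with the given coefficients (lowest degree first)
    return [i * c % p for i, c in enumerate(coeffs)][1:]

def encode(coeffs, p):
    d = derivative(coeffs, p)
    return [(evaluate(coeffs, a, p), evaluate(d, a, p)) for a in range(p)]

# two distinct polynomials of degree < k agree in both value and derivative at most (k-1)/2
# times, since their nonzero difference has degree <= k-1 and each such point is a double root
for _ in range(200):
    f = [random.randrange(p) for _ in range(k)]
    g = [random.randrange(p) for _ in range(k)]
    if f == g:
        continue
    agreements = sum(u == v for u, v in zip(encode(f, p), encode(g, p)))
    assert agreements <= (k - 1) // 2
print("rate of this toy code:", k / (2 * p))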

Cite as

Itay Kalev and Amnon Ta-Shma. Unbalanced Expanders from Multiplicity Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kalev_et_al:LIPIcs.APPROX/RANDOM.2022.12,
  author =	{Kalev, Itay and Ta-Shma, Amnon},
  title =	{{Unbalanced Expanders from Multiplicity Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.12},
  URN =		{urn:nbn:de:0030-drops-171346},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.12},
  annote =	{Keywords: Condensers, Multiplicity codes, Differential equations}
}
Document
Linear Branching Programs and Directional Affine Extractors

Authors: Svyatoslav Gryaznov, Pavel Pudlák, and Navid Talebanfard

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
A natural model of read-once linear branching programs is a branching program where queries are 𝔽₂ linear forms, and along each path, the queries are linearly independent. We consider two restrictions of this model, which we call weakly and strongly read-once, both generalizing standard read-once branching programs and parity decision trees. Our main results are as follows. - Average-case complexity. We define a pseudo-random class of functions which we call directional affine extractors, and show that these functions are hard on average for the strongly read-once model. We then present an explicit construction of such a function with good parameters. This strengthens the result of Cohen and Shinkar (ITCS'16) who gave such average-case hardness for parity decision trees. Directional affine extractors are stronger than the more familiar class of affine extractors. Given the significance of these functions, we expect that our new class of functions might be of independent interest. - Proof complexity. We also consider the proof system Res[⊕], which is an extension of resolution with linear queries, and define the regular variant of Res[⊕]. A refutation of a CNF in this proof system naturally defines a linear branching program solving the corresponding search problem. If a refutation is regular, we prove that the resulting program is read-once. Conversely, we show that a weakly read-once linear BP solving the search problem can be converted to a regular Res[⊕] refutation with constant blow-up, where the regularity condition comes from the definition of weakly read-once BPs, thus obtaining the equivalence between these proof systems.

Cite as

Svyatoslav Gryaznov, Pavel Pudlák, and Navid Talebanfard. Linear Branching Programs and Directional Affine Extractors. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gryaznov_et_al:LIPIcs.CCC.2022.4,
  author =	{Gryaznov, Svyatoslav and Pudl\'{a}k, Pavel and Talebanfard, Navid},
  title =	{{Linear Branching Programs and Directional Affine Extractors}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.4},
  URN =		{urn:nbn:de:0030-drops-165664},
  doi =		{10.4230/LIPIcs.CCC.2022.4},
  annote =	{Keywords: Boolean Functions, Average-Case Lower Bounds, AC0\lbrack2\rbrack, Affine Dispersers, Affine Extractors}
}
Document
Subrank and Optimal Reduction of Scalar Multiplications to Generic Tensors

Authors: Harm Derksen, Visu Makam, and Jeroen Zuiddam

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
Since the seminal works of Strassen and Valiant it has been a central theme in algebraic complexity theory to understand the relative complexity of algebraic problems, that is, to understand which algebraic problems (be it bilinear maps like matrix multiplication in Strassen’s work, or the determinant and permanent polynomials in Valiant’s) can be reduced to each other (under the appropriate notion of reduction). In this paper we work in the setting of bilinear maps and with the usual notion of reduction that allows applying linear maps to the inputs and output of a bilinear map in order to compute another bilinear map. As our main result we determine precisely how many independent scalar multiplications can be reduced to a given bilinear map (this number is called the subrank, and extends the concept of matrix diagonalization to tensors), for essentially all (i.e. generic) bilinear maps. Namely, we prove for a generic bilinear map T : V × V → V where dim(V) = n that Θ(√n) independent scalar multiplications can be reduced to T. Our result significantly improves on the previous upper bound from the work of Strassen (1991) and Bürgisser (1990) which was n^{2/3 + o(1)}. Our result is very precise and tight up to an additive constant. Our full result is much more general and applies not only to bilinear maps and 3-tensors but also to k-tensors, for which we find that the generic subrank is Θ(n^{1/(k-1)}). Moreover, as an application we prove that the subrank is not additive under the direct sum. The subrank plays a central role in several areas of complexity theory (matrix multiplication algorithms, barrier results) and combinatorics (e.g., the cap set problem and sunflower problem). As a consequence of our result we obtain several large separations between the subrank and tensor methods that have received much interest recently, notably the slice rank (Tao, 2016), analytic rank (Gowers-Wolf, 2011; Lovett, 2018; Bhrushundi-Harsha-Hatami-Kopparty-Kumar, 2020), geometric rank (Kopparty-Moshkovitz-Zuiddam, 2020), and G-stable rank (Derksen, 2020). Our proofs of the lower bounds rely on a new technical result about an optimal decomposition of tensor space into structured subspaces, which we think may be of independent interest.

Cite as

Harm Derksen, Visu Makam, and Jeroen Zuiddam. Subrank and Optimal Reduction of Scalar Multiplications to Generic Tensors. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 9:1-9:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{derksen_et_al:LIPIcs.CCC.2022.9,
  author =	{Derksen, Harm and Makam, Visu and Zuiddam, Jeroen},
  title =	{{Subrank and Optimal Reduction of Scalar Multiplications to Generic Tensors}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{9:1--9:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.9},
  URN =		{urn:nbn:de:0030-drops-165716},
  doi =		{10.4230/LIPIcs.CCC.2022.9},
  annote =	{Keywords: tensors, bilinear maps, complexity, subrank, diagonalization, generic tensors, random tensors, reduction, slice rank}
}
Document
On the Partial Derivative Method Applied to Lopsided Set-Multilinear Polynomials

Authors: Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We make progress on understanding a lower bound technique that was recently used by the authors to prove the first superpolynomial constant-depth circuit lower bounds against algebraic circuits. More specifically, our previous work applied the well-known partial derivative method in a new setting, that of lopsided set-multilinear polynomials. A set-multilinear polynomial P ∈ 𝔽[X_1,…,X_d] (for disjoint sets of variables X_1,…,X_d) is a linear combination of monomials, each of which contains one variable from each of X_1,…,X_d. A lopsided space of set-multilinear polynomials is one where the sets X_1,…,X_d are allowed to have different sizes (we use the adjective "lopsided" to stress this feature). By choosing a suitable lopsided space of polynomials, and using a suitable version of the partial-derivative method for proving lower bounds, we were able to prove constant-depth superpolynomial set-multilinear formula lower bounds even for very low-degree polynomials (as long as d is a growing function of the number of variables N). This in turn implied lower bounds against general formulas of constant depth. A priori, there is nothing stopping these techniques from giving us lower bounds against algebraic formulas of any depth. We investigate the extent to which this lower bound can extend to greater depths. We prove the following results. 1) We observe that our choice of the lopsided space and the kind of partial-derivative method used can be modeled as the choice of a multiset W ⊆ [-1,1] of size d. Our first result completely characterizes, for any product-depth Δ, the best lower bound we can prove for set-multilinear formulas of product-depth Δ in terms of some combinatorial properties of W, that we call the depth-Δ tree bias of W. 2) We show that the maximum depth-3 tree bias, over multisets W of size d, is Θ(d^{1/4}). This shows a stronger formula lower bound of N^{Ω(d^{1/4})} for set-multilinear formulas of product-depth 3, and also puts a non-trivial constraint on the best lower bounds we can hope to prove at this depth in this framework (a priori, we could have hoped to prove a lower bound of N^{Ω(Δ d^{1/Δ})} at product-depth Δ). 3) Finally, we show that for small Δ, our proof technique cannot hope to prove lower bounds of the form N^{Ω(d^{1/poly(Δ)})}. This seems to strongly hint that new ideas will be required to prove lower bounds for formulas of unbounded depth.
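
The degree-2 case gives a minimal, concrete view of the measure (an illustration of the general idea only, not the paper's lopsided setting): for d = 2 a set-multilinear polynomial is a bilinear form, the span of its partial derivatives with respect to X_1 is the row space of its coefficient matrix, and the measure is just that matrix's rank; a single product of two linear forms has rank at most 1, while the inner product has full rank. The Python sketch below computes both ranks over F_2.

import random

def rank_mod2(rows):
    # Gaussian elimination over F_2; rows are lists of 0/1 entries
    rows = [row[:] for row in rows]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

n = 8
random.seed(2)
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # inner product sum_i x_i y_i
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]
product = [[a[i] & b[j] for j in range(n)] for i in range(n)]          # (sum a_i x_i) * (sum b_j y_j)

print("measure of the inner product:", rank_mod2(identity))             # n
print("measure of a product of two linear forms:", rank_mod2(product))  # at most 1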

Cite as

Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas. On the Partial Derivative Method Applied to Lopsided Set-Multilinear Polynomials. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 32:1-32:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{limaye_et_al:LIPIcs.CCC.2022.32,
  author =	{Limaye, Nutan and Srinivasan, Srikanth and Tavenas, S\'{e}bastien},
  title =	{{On the Partial Derivative Method Applied to Lopsided Set-Multilinear Polynomials}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{32:1--32:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.32},
  URN =		{urn:nbn:de:0030-drops-165942},
  doi =		{10.4230/LIPIcs.CCC.2022.32},
  annote =	{Keywords: Partial Derivative Method, Barriers to Lower Bounds}
}