20 Search Results for "Shaltiel, Ronen"


Document
RANDOM
Randomness Extractors in AC⁰ and NC¹: Optimal up to Constant Factors

Authors: Kuan Cheng and Ruiyang Wu

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study randomness extractors in AC⁰ and NC¹. For the AC⁰ setting, we give a logspace-uniform construction such that for every k ≥ n/poly log n, ε ≥ 2^{-poly log n}, it can extract from an arbitrary (n, k) source, with a small constant fraction entropy loss, and the seed length is O(log(n/ε)). The seed length and output length are optimal up to constant factors, matching the parameters of the best polynomial-time constructions such as [Guruswami et al., 2009]. The range of k and ε almost meets the lower bounds in [Goldreich et al., 2015] and [Cheng and Li, 2018]. We also generalize the main lower bound of [Goldreich et al., 2015] for extractors in AC⁰, showing that when k < n/poly log n, even strong dispersers do not exist in non-uniform AC⁰. For the NC¹ setting, we also give a logspace-uniform extractor construction with seed length O(log(n/ε)) and a small constant fraction entropy loss in the output. It works for every k ≥ O(log² n), ε ≥ 2^{-O(√k)}. Our main techniques include a new error reduction process and a new output stretch process, based on low-depth circuit implementations for mergers, condensers, and somewhere extractors.
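
For orientation, the standard definitions behind these parameters, in LaTeX (textbook material, not specific to this paper; the exact constant in the entropy loss is stated in the paper itself):

  $X$ is an $(n,k)$ source if it is a distribution over $\{0,1\}^n$ with min-entropy
  $H_\infty(X) = \min_x \log_2 \tfrac{1}{\Pr[X = x]} \ge k$. A function
  $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a $(k,\varepsilon)$ extractor if
  $\Delta\big(\mathrm{Ext}(X, U_d),\, U_m\big) \le \varepsilon$ for every $(n,k)$ source $X$,

where $\Delta$ denotes statistical distance; the entropy loss is $k + d - m$, and "a small constant fraction" loss means the loss is at most a constant fraction of $k$.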

Cite as

Kuan Cheng and Ruiyang Wu. Randomness Extractors in AC⁰ and NC¹: Optimal up to Constant Factors. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 69:1-69:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cheng_et_al:LIPIcs.APPROX/RANDOM.2024.69,
  author =	{Cheng, Kuan and Wu, Ruiyang},
  title =	{{Randomness Extractors in AC⁰ and NC¹: Optimal up to Constant Factors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{69:1--69:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.69},
  URN =		{urn:nbn:de:0030-drops-210623},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.69},
  annote =	{Keywords: randomness extractor, uniform AC⁰, error reduction, uniform NC¹, disperser}
}
Document
Communication Complexity vs Randomness Complexity in Interactive Proofs

Authors: Benny Applebaum, Kaartik Bhushan, and Manoj Prabhakaran

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
In this work, we study the interplay between the communication from a verifier in a general private-coin interactive protocol and the number of random bits it uses in the protocol. Under worst-case derandomization assumptions, we show that it is possible to transform any I-round interactive protocol that uses ρ random bits into another one for the same problem with the additional property that the verifier’s communication is bounded by O(I⋅ρ). Importantly, this is done with only a minor (logarithmic) increase in the communication from the prover to the verifier, and while preserving the randomness complexity. Along the way, we introduce a new compression game between a computationally bounded compressor and a computationally unbounded decompressor, and a new notion of conditioned efficient distributions that may be of independent interest. Our solutions are based on a combination of perfect hashing and pseudorandom generators.
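
In display form, the transformation promised above (restating the abstract; $\mathrm{CC}_V$ and $\mathrm{CC}_P$ are shorthand assumed here for verifier-to-prover and prover-to-verifier communication, not notation from the paper):

  $\mathrm{CC}_V^{\mathrm{new}} = O(I \cdot \rho), \qquad \mathrm{CC}_P^{\mathrm{new}} = \mathrm{CC}_P + O(\log(\,\cdot\,)), \qquad \rho^{\mathrm{new}} = \rho,$

where the logarithmic additive term (whose argument the abstract does not specify) is the "minor increase" in prover-to-verifier communication, and the randomness complexity is preserved exactly.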

Cite as

Benny Applebaum, Kaartik Bhushan, and Manoj Prabhakaran. Communication Complexity vs Randomness Complexity in Interactive Proofs. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 2:1-2:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.ITC.2024.2,
  author =	{Applebaum, Benny and Bhushan, Kaartik and Prabhakaran, Manoj},
  title =	{{Communication Complexity vs Randomness Complexity in Interactive Proofs}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{2:1--2:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.2},
  URN =		{urn:nbn:de:0030-drops-205103},
  doi =		{10.4230/LIPIcs.ITC.2024.2},
  annote =	{Keywords: Interactive Proof Systems, Communication Complexity, Hash Functions, Pseudo-Random Generators, Compression}
}
Document
A Technique for Hardness Amplification Against AC⁰

Authors: William M. Hoza

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for AC⁰ circuits. First, we investigate the extent to which AC⁰ circuits of depth d can approximate AC⁰ circuits of some larger depth d + k. The case k = 1 is resolved by Håstad, Rossman, Servedio, and Tan’s celebrated average-case depth hierarchy theorem (JACM 2017). Our contribution is a significantly stronger correlation bound when k ≥ 3. Specifically, we show that there exists a linear-size AC⁰_{d + k} circuit h : {0, 1}ⁿ → {0, 1} such that for every AC⁰_d circuit g, either g has size exp(n^{Ω(1/d)}), or else g agrees with h on at most a (1/2 + ε)-fraction of inputs where ε = exp(-(1/d) ⋅ Ω(log n)^{k-1}). For comparison, Håstad, Rossman, Servedio, and Tan’s result has ε = n^{-Θ(1/d)}. Second, we consider the majority function. It is well known that the majority function is moderately hard for AC⁰ circuits (and stronger classes). Our contribution is a stronger correlation bound for the XOR of t copies of the n-bit majority function, denoted MAJ_n^{⊕ t}. We show that if g is an AC⁰_d circuit of size S, then g agrees with MAJ_n^{⊕ t} on at most a (1/2 + ε)-fraction of inputs, where ε = (O(log S)^{d - 1} / √n)^t. To prove these results, we develop a hardness amplification technique that is tailored to a specific type of circuit lower bound proof. In particular, one way to show that a function h is moderately hard for AC⁰ circuits is to (a) design some distribution over random restrictions or random projections, (b) show that AC⁰ circuits simplify to shallow decision trees under these restrictions/projections, and finally (c) show that after applying the restriction/projection, h is moderately hard for shallow decision trees with respect to an appropriate distribution. We show that (roughly speaking) if h can be proven to be moderately hard by a proof with that structure, then XORing multiple copies of h amplifies its hardness. Our analysis involves a new kind of XOR lemma for decision trees, which might be of independent interest.
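
To get a sense of how the second bound decays as more copies of majority are XORed, here is a small numerical sketch in Python (illustrative only: the constant hidden inside O(log S) is set to 1 and the log is taken base 2, both assumptions rather than values from the paper):

import math

def maj_xor_correlation_bound(n, S, d, t, c=1.0):
    # eps = (c * (log S)^(d-1) / sqrt(n))^t, per the abstract's bound for
    # AC0_d circuits of size S against MAJ_n^{xor t}; c stands in for the O(.).
    eps_single = c * math.log2(S) ** (d - 1) / math.sqrt(n)
    return eps_single ** t

# XORing t copies drives the advantage over random guessing down exponentially in t.
for t in (1, 2, 4, 8):
    print(t, maj_xor_correlation_bound(n=10**6, S=10**4, d=3, t=t))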

Cite as

William M. Hoza. A Technique for Hardness Amplification Against AC⁰. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2024.1,
  author =	{Hoza, William M.},
  title =	{{A Technique for Hardness Amplification Against AC⁰}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.1},
  URN =		{urn:nbn:de:0030-drops-203977},
  doi =		{10.4230/LIPIcs.CCC.2024.1},
  annote =	{Keywords: Bounded-depth circuits, average-case lower bounds, hardness amplification, XOR lemmas}
}
Document
Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs

Authors: Xin Li and Yan Zhong

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Affine extractors give some of the best-known lower bounds for various computational models, such as AC⁰ circuits, parity decision trees, and general Boolean circuits. However, they are not known to give strong lower bounds for read-once branching programs (ROBPs). In a recent work, Gryaznov, Pudlák, and Talebanfard (CCC' 22) introduced a stronger version of affine extractors known as directional affine extractors, together with a generalization of ROBPs where each node can make linear queries, and showed that the former implies strong lower bounds for a certain type of the latter known as strongly read-once linear branching programs (SROLBPs). Their main result gives explicit constructions of directional affine extractors for entropy k > 2n/3, which implies average-case complexity 2^{n/3-o(n)} against SROLBPs with exponentially small correlation. A follow-up work by Chattopadhyay and Liao (CCC' 23) improves the hardness to 2^{n-o(n)} at the price of increasing the correlation to polynomially large, via a new connection to sumset extractors introduced by Chattopadhyay and Li (STOC' 16) and explicit constructions of such extractors by Chattopadhyay and Liao (STOC' 22). Both works left open the questions of better constructions of directional affine extractors and improved average-case complexity against SROLBPs in the regime of small correlation. This paper provides a much more in-depth study of directional affine extractors, SROLBPs, and ROBPs. Our main results include:
- An explicit construction of directional affine extractors with k = o(n) and exponentially small error, which gives average-case complexity 2^{n-o(n)} against SROLBPs with exponentially small correlation, thus answering the two open questions raised in previous works.
- An explicit function in AC⁰ that gives average-case complexity 2^{(1-δ)n} against ROBPs with negligible correlation, for any constant δ > 0. Previously, no such average-case hardness was known, and the best size lower bound for any function in AC⁰ against ROBPs is 2^Ω(n).
One of the key ingredients in our constructions is a new linear somewhere condenser for affine sources, which is based on dimension expanders. The condenser also leads to an unconditional improvement of the entropy requirement of explicit affine extractors with negligible error. We further show that the condenser also works for general weak random sources, under the Polynomial Freiman-Ruzsa Theorem in 𝖥₂ⁿ, recently proved by Gowers, Green, Manners, and Tao (arXiv' 23).
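
For context, the underlying notions in rough form (standard definitions, simplified; see the paper for the precise directional variant):

  $X$ is an affine source of entropy $k$ if it is uniform over an affine subspace of $\mathbb{F}_2^n$ of dimension $k$. A function $f\colon \mathbb{F}_2^n \to \{0,1\}$ is an affine extractor for entropy $k$ with error $\varepsilon$ if $\left|\Pr[f(X) = 1] - \tfrac{1}{2}\right| \le \varepsilon$ for every such $X$. Directional affine extractors strengthen this by requiring, roughly, that for every nonzero direction $a \in \mathbb{F}_2^n$ the derivative $f(X) \oplus f(X + a)$ is also nearly unbiased.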

Cite as

Xin Li and Yan Zhong. Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.CCC.2024.10,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Explicit Directional Affine Extractors and Improved Hardness for Linear Branching Programs}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.10},
  URN =		{urn:nbn:de:0030-drops-204060},
  doi =		{10.4230/LIPIcs.CCC.2024.10},
  annote =	{Keywords: Randomness Extractors, Affine, Read-once Linear Branching Programs, Low-degree polynomials, AC⁰ circuits}
}
Document
A Strong Direct Sum Theorem for Distributional Query Complexity

Authors: Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Consider the expected query complexity of computing the k-fold direct product f^{⊗ k} of a function f to error ε with respect to a distribution μ^k. One strategy is to sequentially compute each of the k copies to error ε/k with respect to μ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem that holds for all functions in the standard query model had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new resilience lemma that accompanies it, showing that the hardcore of f^{⊗k} is likely to remain dense under arbitrary partitions of the input space.
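
The naive strategy whose optimality is being established is just the union bound; as a one-line check (standard reasoning, with $\overline{\mathrm{D}}_{\varepsilon}(f)$ as assumed notation for the expected query cost of computing $f$ to error $\varepsilon$ over $\mu$):

  $\Pr[\text{some copy errs}] \le \sum_{i=1}^{k} \Pr[\text{copy } i \text{ errs}] \le k \cdot \tfrac{\varepsilon}{k} = \varepsilon,$

so running the error-$\varepsilon/k$ algorithm on each copy computes $f^{\otimes k}$ to error $\varepsilon$ in roughly $k \cdot \overline{\mathrm{D}}_{\varepsilon/k}(f)$ expected queries.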

Cite as

Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan. A Strong Direct Sum Theorem for Distributional Query Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 16:1-16:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2024.16,
  author =	{Blanc, Guy and Koch, Caleb and Strassle, Carmen and Tan, Li-Yang},
  title =	{{A Strong Direct Sum Theorem for Distributional Query Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{16:1--16:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.16},
  URN =		{urn:nbn:de:0030-drops-204123},
  doi =		{10.4230/LIPIcs.CCC.2024.16},
  annote =	{Keywords: Query complexity, direct product theorem, hardcore theorem}
}
Document
Local Enumeration and Majority Lower Bounds

Authors: Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Depth-3 circuit lower bounds and k-SAT algorithms are intimately related; the state-of-the-art Σ^k_3-circuit lower bound (Or-And-Or circuits with bottom fan-in at most k) and the k-SAT algorithm of Paturi, Pudlák, Saks, and Zane (J. ACM'05) are based on the same combinatorial theorem regarding k-CNFs. In this paper we define a problem which reveals new interactions between the two and suggests a concrete approach to significantly stronger circuit lower bounds and improved k-SAT algorithms. For a natural number k and a parameter t, we consider the Enum(k, t) problem defined as follows: given an n-variable k-CNF and an initial assignment α, output all satisfying assignments at Hamming distance t(n) from α, assuming that there are no satisfying assignments at Hamming distance less than t(n) from α. We observe that an upper bound b(n, k, t) on the complexity of Enum(k, t) simultaneously implies depth-3 circuit lower bounds and k-SAT algorithms:
- Depth-3 circuits: Any Σ^k_3 circuit computing the Majority function has size at least binom(n,n/2)/b(n, k, n/2).
- k-SAT: There exists an algorithm solving k-SAT in time O(∑_{t=1}^{n/2}b(n, k, t)).
A simple construction shows that b(n, k, n/2) ≥ 2^{(1 - O(log(k)/k))n}. Thus, matching upper bounds for b(n, k, n/2) would imply a Σ^k_3-circuit lower bound of 2^Ω(log(k)n/k) and a k-SAT upper bound of 2^{(1 - Ω(log(k)/k))n}. The former yields an unrestricted depth-3 lower bound of 2^ω(√n), solving a long-standing open problem, and the latter breaks the Super Strong Exponential Time Hypothesis. In this paper, we propose a randomized algorithm for Enum(k, t) and introduce new ideas to analyze it. We demonstrate the power of our ideas by considering the first non-trivial instance of the problem, i.e., Enum(3, n/2). We show that the expected running time of our algorithm is 1.598ⁿ, substantially improving on the trivial bound of 3^{n/2} ≃ 1.732ⁿ. This already improves the Σ^3_3 lower bound for the Majority function to 1.251ⁿ. The previous bound was 1.154ⁿ, which follows from the work of Håstad, Jukna, and Pudlák (Comput. Complex.'95). By restricting ourselves to monotone CNFs, Enum(k, t) immediately becomes a hypergraph Turán problem. Therefore, our techniques might be of independent interest in extremal combinatorics.
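
The constants quoted above can be sanity-checked numerically; a minimal sketch in Python (illustrative arithmetic only — the 1.251ⁿ figure comes from dividing binom(n, n/2) ≈ 2ⁿ/poly(n) by the 1.598ⁿ enumeration bound):

import math

n = 1000  # illustrative input length
print(3 ** 0.5)     # trivial enumeration bound: 3^(n/2) = (3^(1/2))^n ≈ 1.732^n
print(2 / 1.598)    # binom(n, n/2)/1.598^n grows like (2/1.598)^n ≈ 1.2515^n, up to poly(n)
ratio = math.comb(n, n // 2) / 1.598 ** n
print(ratio ** (1 / n))  # ≈ 1.247 at n = 1000; tends to 1.2515 as n grows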

Cite as

Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard. Local Enumeration and Majority Lower Bounds. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gurumukhani_et_al:LIPIcs.CCC.2024.17,
  author =	{Gurumukhani, Mohit and Paturi, Ramamohan and Pudl\'{a}k, Pavel and Saks, Michael and Talebanfard, Navid},
  title =	{{Local Enumeration and Majority Lower Bounds}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.17},
  URN =		{urn:nbn:de:0030-drops-204136},
  doi =		{10.4230/LIPIcs.CCC.2024.17},
  annote =	{Keywords: Depth 3 circuits, k-CNF satisfiability, Circuit lower bounds, Majority function}
}
Document
Pseudorandomness, Symmetry, Smoothing: I

Authors: Harm Derksen, Peter Ivanov, Chin Ho Lee, and Emanuele Viola

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We prove several new results about bounded uniform and small-bias distributions. A main message is that small-bias distributions, even perturbed with noise, do not fool several classes of tests better than bounded uniformity. We prove this for threshold tests, small-space algorithms, and small-depth circuits. In particular, we obtain small-bias distributions that
- achieve an optimal lower bound on their statistical distance to any bounded-uniform distribution. This closes a line of research initiated by Alon, Goldreich, and Mansour in 2003, and improves on a result by O'Donnell and Zhao.
- have heavier tail mass than the uniform distribution. This answers a question posed by several researchers including Bun and Steinke.
- rule out a popular paradigm for constructing pseudorandom generators, originating in a 1989 work by Ajtai and Wigderson. This again answers a question raised by several researchers. For branching programs, our result matches a bound by Forbes and Kelley.
Our small-bias distributions above are symmetric. We show that the xor of any two symmetric small-bias distributions fools any bounded function. Hence our examples cannot be extended to the xor of two small-bias distributions, another popular paradigm whose power remains unknown. We also generalize and simplify the proof of a result of Bazzi.
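
The two notions being compared have standard definitions (stated here for orientation; they are not results of the paper):

  $X$ over $\{0,1\}^n$ is $k$-wise uniform (bounded uniform) if its restriction to every set of $k$ coordinates is exactly uniform, and $\varepsilon$-biased (small-bias) if
  $\left| \mathrm{E}\big[ (-1)^{\sum_{i \in S} X_i} \big] \right| \le \varepsilon$ for every nonempty $S \subseteq [n]$.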

Cite as

Harm Derksen, Peter Ivanov, Chin Ho Lee, and Emanuele Viola. Pseudorandomness, Symmetry, Smoothing: I. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 18:1-18:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{derksen_et_al:LIPIcs.CCC.2024.18,
  author =	{Derksen, Harm and Ivanov, Peter and Lee, Chin Ho and Viola, Emanuele},
  title =	{{Pseudorandomness, Symmetry, Smoothing: I}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{18:1--18:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.18},
  URN =		{urn:nbn:de:0030-drops-204144},
  doi =		{10.4230/LIPIcs.CCC.2024.18},
  annote =	{Keywords: pseudorandomness, k-wise uniform distributions, small-bias distributions, noise, symmetric tests, thresholds, Krawtchouk polynomials}
}
Document
Track A: Algorithms, Complexity and Games
Two-Source and Affine Non-Malleable Extractors for Small Entropy

Authors: Xin Li and Yan Zhong

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Non-malleable extractors are generalizations and strengthenings of standard randomness extractors that are resilient to adversarial tampering. Such extractors have wide applications in cryptography and have become important cornerstones in recent breakthroughs in explicit constructions of two-source extractors and affine extractors for small entropy. However, explicit constructions of non-malleable extractors appear to be much harder than standard extractors. Indeed, in the well-studied models of two-source and affine non-malleable extractors, the previous best constructions, by Li (FOCS' 23), only work for entropy rate > 2/3 and 1-γ, respectively, for some small constant γ > 0. In this paper, we present explicit constructions of two-source and affine non-malleable extractors that match the state-of-the-art constructions of standard ones for small entropy. Our main results include:
- Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k ≥ log^C n and polynomially small error, matching the parameters of standard extractors by Chattopadhyay and Zuckerman (STOC' 16, Annals of Mathematics' 19) and Li (FOCS' 16).
- Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k = O(log n) and constant error, matching the parameters of standard extractors by Li (FOCS' 23).
Our constructions significantly improve previous results, and the parameters (entropy requirement and error) are the best possible without first improving the constructions of standard extractors. In addition, our improved affine non-malleable extractors give strong lower bounds for a certain kind of read-once linear branching programs, recently introduced by Gryaznov, Pudlák, and Talebanfard (CCC' 22) as a generalization of several well-studied computational models. These bounds match the previously best-known average-case hardness results given by Chattopadhyay and Liao (CCC' 23) and Li (FOCS' 23), where the branching program size lower bounds are close to optimal, but the explicit functions we use here are different. Our results also suggest a possible deeper connection between non-malleable extractors and standard ones.
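
As orientation, a rough form of the non-malleability requirement in the two-source case (simplified to a single tampering per source; the paper's definition is more general):

  $\big(\mathrm{nmExt}(X, Y),\ \mathrm{nmExt}(f(X), g(Y))\big) \approx_\varepsilon \big(U_m,\ \mathrm{nmExt}(f(X), g(Y))\big)$

for independent sources $X, Y$ of min-entropy $k$ and tampering functions $f, g$, at least one of which has no fixed point: the output must look uniform even given the output on tampered inputs.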

Cite as

Xin Li and Yan Zhong. Two-Source and Affine Non-Malleable Extractors for Small Entropy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 108:1-108:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2024.108,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Two-Source and Affine Non-Malleable Extractors for Small Entropy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{108:1--108:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.108},
  URN =		{urn:nbn:de:0030-drops-202512},
  doi =		{10.4230/LIPIcs.ICALP.2024.108},
  annote =	{Keywords: Randomness Extractors, Non-malleable, Two-source, Affine}
}
Document
Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols

Authors: Dieter van Melkebeek and Nicollas Mocelin Sdroievski

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness to derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM.

Cite as

Dieter van Melkebeek and Nicollas Mocelin Sdroievski. Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 17:1-17:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{vanmelkebeek_et_al:LIPIcs.CCC.2023.17,
  author =	{van Melkebeek, Dieter and Mocelin Sdroievski, Nicollas},
  title =	{{Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{17:1--17:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.17},
  URN =		{urn:nbn:de:0030-drops-182870},
  doi =		{10.4230/LIPIcs.CCC.2023.17},
  annote =	{Keywords: Hardness versus randomness tradeoff, Arthur-Merlin protocol, targeted hitting set generator}
}
Document
An Algorithmic Approach to Uniform Lower Bounds

Authors: Rahul Santhanam

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
We propose a new family of circuit-based sampling tasks, such that non-trivial algorithmic solutions to certain tasks from this family imply frontier uniform lower bounds such as "NP is not in uniform ACC⁰" and "NP does not have uniform polynomial-size depth-two threshold circuits". Indeed, the most general versions of our sampling tasks have implications for central open problems such as NP vs P and PSPACE vs P. We argue the soundness of our approach by showing that the non-trivial algorithmic solutions we require do follow from standard cryptographic assumptions. In addition, we give evidence that a version of our approach for uniform circuits is necessary in order to separate NP from P or PSPACE from P. We give an algorithmic characterization for the PSPACE vs P question: PSPACE ≠ P iff either E has sub-exponential time non-uniform algorithms infinitely often or there are non-trivial space-efficient solutions to our sampling tasks for uniform Boolean circuits. We show how to use our framework to capture uniform versions of known non-uniform lower bounds, as well as classical uniform lower bounds such as the space hierarchy theorem and Allender’s uniform lower bound for the Permanent. We also apply our framework to prove new lower bounds: NP does not have polynomial-size uniform AC⁰ circuits with a bottom layer of MOD 6 gates, nor does it have polynomial-size uniform AC⁰ circuits with a bottom layer of threshold gates. Our proofs exploit recently defined probabilistic time-bounded variants of Kolmogorov complexity [Zhenjian Lu et al., 2022; Halley Goldberg et al., 2022; Halley Goldberg et al., 2022].

Cite as

Rahul Santhanam. An Algorithmic Approach to Uniform Lower Bounds. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 35:1-35:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{santhanam:LIPIcs.CCC.2023.35,
  author =	{Santhanam, Rahul},
  title =	{{An Algorithmic Approach to Uniform Lower Bounds}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{35:1--35:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.35},
  URN =		{urn:nbn:de:0030-drops-183053},
  doi =		{10.4230/LIPIcs.CCC.2023.35},
  annote =	{Keywords: Probabilistic Kolmogorov complexity, sampling algorithms, uniform lower bounds}
}
Document
Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)

Authors: Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán

Published in: Dagstuhl Reports, Volume 12, Issue 9 (2023)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 22371 "Algebraic and Analytic Methods in Computational Complexity". Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Besides algebraic methods, analytic methods have been used for quite some time in theoretical computer science. These methods can also be used to solve algebraic problems, as shown by many recent examples in the areas of derandomization, coding theory, and circuit lower bounds. These new directions were the focus of the Dagstuhl Seminar and reflect the developments in the field since the previous Dagstuhl Seminar 18391. This Dagstuhl Seminar brought together researchers who are using a diverse array of algebraic and analytic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and this seminar played a role in educating a diverse community about the latest techniques, spurring further progress.

Cite as

Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán. Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371). In Dagstuhl Reports, Volume 12, Issue 9, pp. 41-59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{blaser_et_al:DagRep.12.9.41,
  author =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  title =	{{Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)}},
  pages =	{41--59},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{9},
  editor =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.9.41},
  URN =		{urn:nbn:de:0030-drops-178092},
  doi =		{10.4230/DagRep.12.9.41},
  annote =	{Keywords: (de-)randomization, algebra, circuits, coding, computational complexity}
}
Document
Incompressibility and Next-Block Pseudoentropy

Authors: Iftach Haitner, Noam Mazor, and Jad Silbak

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A distribution is k-incompressible (Yao [FOCS '82]) if no efficient compression scheme compresses it to less than k bits. While being a natural measure, its relation to other computational analogs of entropy, such as pseudoentropy (Håstad, Impagliazzo, Levin, and Luby [SICOMP '99]), and to other cryptographic hardness assumptions, was unclear. We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k-2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X)+2)-incompressible implies the existence of one-way functions.
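
In display form, the quantitative chain shown (paraphrasing the abstract):

  $X \text{ is } k\text{-incompressible} \;\Longrightarrow\; X \text{ has next-block pseudoentropy at least } k - 2,$

so a samplable $X$ that is $(H(X)+2)$-incompressible has next-block pseudoentropy exceeding its real entropy, a gap known to yield one-way functions.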

Cite as

Iftach Haitner, Noam Mazor, and Jad Silbak. Incompressibility and Next-Block Pseudoentropy. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 66:1-66:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{haitner_et_al:LIPIcs.ITCS.2023.66,
  author =	{Haitner, Iftach and Mazor, Noam and Silbak, Jad},
  title =	{{Incompressibility and Next-Block Pseudoentropy}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{66:1--66:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.66},
  URN =		{urn:nbn:de:0030-drops-175697},
  doi =		{10.4230/LIPIcs.ITCS.2023.66},
  annote =	{Keywords: incompressibility, next-block pseudoentropy, sparse languages}
}
Document
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

Authors: Ronen Shaltiel and Emanuele Viola

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP=P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B and functions computable in time 2^{B ⋅ n} that cannot be computed by circuits of size 2^{β ⋅ n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1))⋅ log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G₁:{0,1}^{(1+o(1)) ⋅ log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂:{0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)}, and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₂ from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
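
The two parameter regimes contrasted above, side by side (a compression of the abstract's definitions into one display; the question mark marks the implication studied in this paper):

  $\text{high-end: } \exists\, 0 < \beta < 1 < B,\ f \in \mathrm{TIME}(2^{B n}) \text{ hard for circuits of size } 2^{\beta n} \;\Longrightarrow\; r = O(\log m)$
  $\text{extreme high-end: } \beta = 1 - o(1),\ B = 1 + o(1) \;\overset{?}{\Longrightarrow}\; r = (1 + o(1)) \cdot \log m$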

Cite as

Ronen Shaltiel and Emanuele Viola. On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 116:1-116:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{shaltiel_et_al:LIPIcs.ITCS.2022.116,
  author =	{Shaltiel, Ronen and Viola, Emanuele},
  title =	{{On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{116:1--116:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.116},
  URN =		{urn:nbn:de:0030-drops-157122},
  doi =		{10.4230/LIPIcs.ITCS.2022.116},
  annote =	{Keywords: Complexity Theory, Derandomization, Pseudorandom generators, Black-box proofs}
}
Document
Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)

Authors: Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
A binary code Enc:{0,1}^k → {0,1}ⁿ is (1/2-ε,L)-list decodable if for every w ∈ {0,1}ⁿ, there exists a set List(w) of size at most L, containing all messages m ∈ {0,1}^k such that the relative Hamming distance between Enc(m) and w is at most 1/2-ε. A q-query local list-decoder for Enc is a randomized procedure Dec that, when given oracle access to a string w, makes at most q oracle calls, and for every message m ∈ List(w), with high probability, there exists j ∈ [L] such that for every i ∈ [k], with high probability, Dec^w(i,j) = m_i. We prove lower bounds on q that apply even if L is huge (say L = 2^{k^{0.9}}) and the rate of Enc is small (meaning that n ≥ 2^{k}):
- For ε = 1/k^{ν} for some constant 0 < ν < 1, we prove a lower bound of q = Ω(log(1/δ)/ε²), where δ is the error probability of the local list-decoder. This bound is tight, as there is a matching upper bound by Goldreich and Levin (STOC 1989) of q = O(log(1/δ)/ε²) for the Hadamard code (which has n = 2^k). This bound extends an earlier work of Grinberg, Shaltiel and Viola (FOCS 2018) which only works if n ≤ 2^{k^ν} and the number of coins tossed by Dec is small (and therefore does not apply to the Hadamard code, or other codes with low rate).
- For smaller ε, we prove a lower bound of roughly q = Ω(1/(√ε)). To the best of our knowledge, this is the first lower bound on the number of queries of local list-decoders that gives q ≥ k for small ε.
Local list-decoders with small ε form the key component in the celebrated theorem of Goldreich and Levin that extracts a hard-core predicate from a one-way function. We show that black-box proofs cannot improve the Goldreich-Levin theorem and produce a hard-core predicate that is hard to predict with probability 1/2 + 1/𝓁^ω(1) when provided with a one-way function f:{0,1}^𝓁 → {0,1}^𝓁, where f is such that circuits of size poly(𝓁) cannot invert f with probability ρ = 1/2^√𝓁 (or even ρ = 1/2^Ω(𝓁)). This limitation applies to any proof by black-box reduction (even if the reduction is allowed to use nonuniformity and has oracle access to f).
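
In display form, the two lower bounds (restating the abstract):

  $q = \Omega\!\left(\tfrac{\log(1/\delta)}{\varepsilon^{2}}\right) \text{ for } \varepsilon = 1/k^{\nu},\ 0 < \nu < 1, \qquad q = \Omega\!\left(1/\sqrt{\varepsilon}\right) \text{ (roughly) for smaller } \varepsilon,$

the first matching the Goldreich-Levin upper bound for the Hadamard code, and the second giving $q \ge k$ once $\varepsilon$ is sufficiently small.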

Cite as

Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma. Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ronzewi_et_al:LIPIcs.ITCS.2021.33,
  author =	{Ron-Zewi, Noga and Shaltiel, Ronen and Varma, Nithin},
  title =	{{Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-135724},
  doi =		{10.4230/LIPIcs.ITCS.2021.33},
  annote =	{Keywords: Local list-decoding, Hard-core predicates, Black-box reduction, Hadamard code}
}
Document
RANDOM
Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?

Authors: Ronen Shaltiel

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
Yao’s XOR lemma states that for every function f:{0,1}^k → {0,1}, if f has hardness 2/3 for P/poly (meaning that for every circuit C in P/poly, Pr[C(X) = f(X)] ≤ 2/3 on a uniform input X), then the task of computing f(X₁) ⊕ … ⊕ f(X_t) for sufficiently large t has hardness 1/2 + ε for P/poly. Known proofs of this lemma cannot achieve ε = 1/k^ω(1), and even for ε = 1/k, we do not know how to replace P/poly by AC⁰[parity] (the class of constant depth circuits with the gates {and,or,not,parity} of unbounded fan-in). Recently, Grinberg, Shaltiel and Viola (FOCS 2018) (building on a sequence of earlier works) showed that these limitations cannot be circumvented by black-box reductions, namely, by reductions Red^(⋅) that, given oracle access to a function D that violates the conclusion of Yao’s XOR lemma, implement a circuit that violates its assumption. There are a few known reductions in the related literature on worst-case to average-case reductions that are non-black-box. Specifically, the reductions of Gutfreund, Shaltiel and Ta-Shma (Computational Complexity 2007) and Hirahara (FOCS 2018) are "class reductions" that are only guaranteed to succeed when given oracle access to an oracle D from some efficient class of algorithms. These works seem to circumvent some black-box impossibility results. In this paper we extend the previous limitations of Grinberg, Shaltiel and Viola to class reductions, giving evidence that class reductions cannot yield the desired improvements in Yao’s XOR lemma. To the best of our knowledge, this is the first limitation on reductions for hardness amplification that applies to class reductions. Our technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof with an efficient one that is based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement.
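
For reference, the lemma under discussion in display form (the standard statement, using the abstract's parameters):

  $\forall C \in \mathrm{P/poly}:\ \Pr[C(X) = f(X)] \le \tfrac{2}{3} \;\Longrightarrow\; \forall C' \in \mathrm{P/poly}:\ \Pr\big[C'(X_1, \ldots, X_t) = f(X_1) \oplus \cdots \oplus f(X_t)\big] \le \tfrac{1}{2} + \varepsilon$

for sufficiently large $t$; the limitation discussed is that known proofs cannot push $\varepsilon$ below $1/\mathrm{poly}(k)$.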

Cite as

Ronen Shaltiel. Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{shaltiel:LIPIcs.APPROX/RANDOM.2020.10,
  author =	{Shaltiel, Ronen},
  title =	{{Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  URN =		{urn:nbn:de:0030-drops-126138},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  annote =	{Keywords: Yao’s XOR lemma, Hardness amplification, black-box reductions}
}
  • Refine by Author
  • 7 Shaltiel, Ronen
  • 3 Kabanets, Valentine
  • 2 Applebaum, Benny
  • 2 Artemenko, Sergei
  • 2 Impagliazzo, Russell

  • Refine by Classification
  • 7 Theory of computation → Circuit complexity
  • 7 Theory of computation → Pseudorandomness and derandomization
  • 3 Theory of computation → Computational complexity and cryptography
  • 2 Theory of computation
  • 2 Theory of computation → Expander graphs and randomness extractors

  • Refine by Keyword
  • 2 Affine
  • 2 Derandomization
  • 2 Pseudorandomness
  • 2 Randomness Extractors
  • 2 pseudorandomness

  • Refine by Type
  • 20 document

  • Refine by Publication Year
  • 8 2024
  • 4 2023
  • 2 2016
  • 2 2019
  • 1 2015
